Expect scripts cannot match

I am using the Community Edition of ActiveTcl from ActiveState on Windows with the Expect package installed. I have tried writing my own scripts, downloading some from various websites, and even copying and pasting them from the ActiveState website itself, but I always run into the same problem: my scripts can send commands perfectly and configure network devices as expected, but only by sandwiching the sends between sleep periods. Whenever I try to match anything with expect, I always get the same result (shown here with exp_internal 1):
expect: does "" (spawn_id exp4) match glob pattern "AnyString"? no
The same happens with regular expressions:
expect: does "" (spawn_id exp4) match regular expression "AnyString"? no
The only pattern that ever matches is a single wildcard:
expect: does "" (spawn_id exp4) match glob pattern "*"? yes
expect: set expect_out(0,string) ""
expect: set expect_out(spawn_id) "exp4"
expect: set expect_out(buffer) ""
No other combination of wildcards, literals, or regular expressions seems to work. I have watched videos and seen screenshots, and from what I can tell expect should consume the output character by character until a match is found, but for me it just sits on an empty buffer (""), times out, and continues to the next line. I am sure I am doing something fundamentally wrong if the expect command does not work in Expect, but I just don't know what. It is as if Expect cannot read any of the terminal output, yet the send commands work perfectly, so I know it is connected. Every terminal program I have tried (telnet, plink, netcat, etc.) shows the same problem. Expect really seems to be an awesome automation tool, so I'd appreciate any suggestions that might help me get past this. I fully expect to feel stupid after reading the answer.

This is probably due to known issues with Expect on later versions of Windows, which are unfortunately poorly documented. Expect may work on Windows 7 or 8, but probably not on Windows 10, and it generally works better on 32-bit Windows than on 64-bit Windows.
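Before giving up, a minimal sanity check can at least confirm whether Expect can read anything from a spawned process on your machine. This is only a sketch; cmd.exe and the ">" prompt pattern are assumptions, so substitute whatever program you are actually driving:
package require Expect

exp_internal 1          ;# show exactly what the matcher sees in the buffer
set timeout 10

spawn cmd.exe
expect {
    ">"     { send_user "matched a prompt; Expect can read process output\n" }
    timeout { send_user "timed out on an empty buffer; output capture itself is failing\n" }
}
If even this times out with an empty buffer, the failure is in Expect's output capture on your version of Windows, not in your patterns.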

Related

How to actually use debugging commands in ChatScript?

I've been using ChatScript to make a bot (version 12.2 on Windows, using the command line). I'd like to use debugging commands such as :why and :redo as described in the documentation (https://github.com/ChatScript/ChatScript/blob/master/WIKI/ChatScript-Debugging-Manual.md), but almost none of them work.
:commands makes the bot respond with nothing:
rob: > :commands
HARRY:
:why does the same.
:redo makes the program silently crash.
There are only a few that seem to work as intended, such as :prepare. Has anyone been able to get debugging commands to work?

Comma says "test file died" while zef test . passes

In some cases, and I'm not sure why, Comma indicates "test file died" while, apparently, all tests have passed and zef does not indicate any problem. This happens, for instance, in this file, where the only quirk seems to be that plan * is being used.
However, while all the tests that fail in Comma have that in common, it does not seem to be the only factor, since other files with the same plan do work. Any ideas?

.NET Dir$ Function on Network Path

I have a network path that contains hundreds of thousands of .wav files. When I do the following:
FileBuffer = Dir$("\\MEDIASERVER\*.wav", FileAttribute.Archive)
The line freezes forever. I have literally let it run for a day, and it never returns. I then decided to test for the same symptom with a dir command in DOS. Same symptom.
I then wondered whether I would get the same symptom if I added a prefix to the search pattern to narrow the results. I did this in DOS:
DIR 0009*.wav
Worked like a charm. So, armed with this knowledge, I went back to my VB.NET project and applied a similar solution:
FileBuffer = Dir$("\\MEDIASERVER\0009*.wav", FileAttribute.Archive)
Doesn't get stuck, actually does the search. But I was surprised by the first result:
FileBuffer came back with the following value:
003925034541228334146804222014065036AM005020MIF.wav
This does not match the pattern I asked for. Can someone tell me what I am doing wrong? Is there a known bug with DIR$? Is there a way to achieve what I want without enumerating 100% of the files in the network share?
Additional Information if it's relevant:
Development machine: Windows 7 Pro, VS 2013 Pro
Network server: Linux CentOS 5.0 (I have the same issue with a network drive running Windows 7 Pro).
Thanks in advance.
Sorry for posting the question and then answering it myself so quickly. The question of whether or not it worked properly in DOS helped narrow my troubleshooting. I came across a couple of pages on the net stating that DIR can return unexpected results because of the 8.3 naming convention: the wildcard is matched against each file's short 8.3 name as well as its long name. I then decided to see whether there were any other constraints I could add to the pattern. I changed it to:
DIR \\MEDIASERVER\0009*MIF.wav
And now I am getting the expected results. This cannot be because of the 8.3 naming convention, though; it must be something else, but at least I got it working.
Thanks for your time.
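For the last part of the original question (avoiding a full enumeration), one lazy alternative is System.IO.Directory.EnumerateFiles, which streams matches back as the directory is read instead of buffering the entire listing the way Dir$ appears to. A minimal sketch, assuming the share is named "share" (the original UNC path omits the share name), and noting that EnumerateFiles returns full paths and takes no attribute filter:
Imports System.IO

Module FindWavFiles
    Sub Main()
        ' Streams results lazily; the first match is available without
        ' waiting for the whole directory to be enumerated.
        For Each fileName As String In Directory.EnumerateFiles(
                "\\MEDIASERVER\share", "0009*MIF.wav")
            Console.WriteLine(fileName)
        Next
    End Sub
End Module
Note that .NET search patterns are also matched against 8.3 short names, so the occasional unexpected match can still appear; EnumerateFiles only removes the up-front cost of walking the whole directory.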

Workaround for enoent error from Erlang's Common Test on Windows?

When using Common Test with Erlang on Windows, I run into a lot of bugs. For one, if there are any spaces in the project's path, Common Test often fails outright. To work around this, I moved the project to a path with no spaces (though I really wish the devs would fix the libraries so they work better on Windows). Now I have Common Test mostly running, except it won't print the HTML report at the end. This is the error I get after the tests run:
Testing myapp.ebin: EXIT, reason {
{badmatch,{error,enoent}},
[{test_server_ctrl,start_minor_log_file1,4,
[{file,"test_server_ctrl.erl"},{line,1959}]},
{test_server_ctrl,run_test_case1,11,
[{file,"test_server_ctrl.erl"},{line,3761}]},
{test_server_ctrl,run_test_cases_loop,5,
[{file,"test_server_ctrl.erl"},{line,3032}]},
{test_server_ctrl,run_test_cases,3,
[{file,"test_server_ctrl.erl"},{line,2294}]},
{test_server_ctrl,ts_tc,3,
[{file,"test_server_ctrl.erl"},{line,1434}]},
{test_server_ctrl,init_tester,9,
[{file,"test_server_ctrl.erl"},
{line,1401}]}]}
This happened sometimes in Erlang R15 and older if the test function names were either too long or had too many underscores in them (which I suspect is also a bug), or when too many tests failed (which makes Common Test useless to me for TDD). But now it happens on every ct:run from Common Test in R15B01. Does anyone know how I can work around this? Has anyone had any success with TDD and Common Test on Windows?
Given the last comment, you might want to disable the built-in hooks. You can do this by passing the following option to ct:run/1 or ct_run:
{enable_builtin_hooks,false}
That should disable the cth_log_redirect hook and may solve your problem during overload.
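A minimal sketch from the Erlang shell (the dir and logdir values are placeholders):
ct:run_test([{dir, "test"},
             {logdir, "logs"},
             {enable_builtin_hooks, false}]).

%% The equivalent ct_run command-line flag:
%%   ct_run -dir test -logdir logs -enable_builtin_hooks false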

How would one go about testing an interpreter or a compiler?

I've been experimenting with creating an interpreter for Brainfuck, and while it was quite simple to build and get running, part of me wants to be able to run tests against it. I can't seem to fathom how many tests one might have to write to cover all the possible instruction combinations and ensure that the implementation is correct.
Obviously, with Brainfuck, the instruction set is small, but I can't help thinking that as more instructions are added, your test code would grow exponentially; more so than your typical tests, at any rate.
Now, I'm about as newbie as you can get in terms of writing compilers and interpreters, so my assumptions could very well be way off base.
Basically, where do you even begin with testing on something like this?
Testing a compiler is a little different from testing some other kinds of apps, because it's OK for the compiler to produce different assembly-code versions of a program as long as they all do the right thing. However, if you're just testing an interpreter, it's pretty much the same as any other text-based application. Here is a Unix-centric view:
You will want to build up a regression test suite. Each test should have
Source code you will interpret, say test001.bf
Standard input to the program you will interpret, say test001.0
What you expect the interpreter to produce on standard output, say test001.1
What you expect the interpreter to produce on standard error, say test001.2 (you care about standard error because you want to test your interpreter's error messages)
You will need a "run test" script that does something like the following
# Stop at the first failing test and show the differences.
fail() {
    echo "Unexpected differences on $1:"
    diff "$2" "$3"
    exit 1
}

# Iterates over the test names passed as arguments
# ("for testname" with no list loops over "$@").
for testname
do
    tmp1=$(mktemp)      # mktemp is more portable than Debian's tempfile
    tmp2=$(mktemp)
    brainfuck "$testname.bf" < "$testname.0" > "$tmp1" 2> "$tmp2"
    cmp -s "$testname.1" "$tmp1" || fail "stdout" "$testname.1" "$tmp1"
    cmp -s "$testname.2" "$tmp2" || fail "stderr" "$testname.2" "$tmp2"
    rm -f "$tmp1" "$tmp2"
done
You will find it helpful to have a "create test" script that does something like
brainfuck "$testname.bf" < "$testname.0" > "$testname.1" 2> "$testname.2"
You run this only when you're totally confident that the interpreter works for that case.
You keep your test suite under source control.
It's convenient to embellish your test script so you can leave out files that are expected to be empty.
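One way to do that, as a sketch using the same naming scheme as above, is to substitute /dev/null for any expected-output file that does not exist before comparing:
expected_out=$testname.1; [ -f "$expected_out" ] || expected_out=/dev/null
expected_err=$testname.2; [ -f "$expected_err" ] || expected_err=/dev/null
cmp -s "$expected_out" "$tmp1" || fail "stdout" "$expected_out" "$tmp1"
cmp -s "$expected_err" "$tmp2" || fail "stderr" "$expected_err" "$tmp2"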
Any time anything changes, you re-run all the tests. You probably also re-run them all nightly via a cron job.
Finally, you want to add enough tests to get good coverage of your compiler's source code. The quality of coverage tools varies widely, but GNU gcov is adequate.
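For example, a sketch assuming a C implementation in bf.c and the run-test script above saved as run-tests:
gcc --coverage -o brainfuck bf.c    # instrument with gcov counters
./run-tests test001 test002         # exercise it via the regression suite
gcov bf.c                           # writes bf.c.gcov with per-line hit counts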
Good luck with your interpreter! If you want to see a lovingly crafted but not very well documented testing infrastructure, go look at the test2 directory for the Quick C-- compiler.
I don't think there's anything 'special' about testing a compiler; in a sense it's almost easier than testing some other programs, since a compiler has such a simple high-level contract: you hand in source, and it gives you back (possibly) compiled code and (possibly) a set of diagnostic messages.
Like any complex software entity, there will be many code paths, but since it's all very data-oriented (text in, text and bytes out) it's straightforward to author tests.
I've written an article on compiler testing, the original conclusion of which (slightly toned down for publication) was: it's morally wrong to reinvent the wheel. Unless you already know all about the preexisting solutions and have a very good reason for ignoring them, you should start by looking at the tools that already exist. The easiest place to start is the GNU C Torture suite, but bear in mind that it's based on DejaGnu, which has, shall we say, issues. (It took me six attempts even to get the maintainer to allow a critical bug report about the Hello World example onto the mailing list.)
I’ll immodestly suggest that you look at the following as a starting place for tools to investigate:
Software: Practice and Experience, April 2007 (payware, not available to the general public; a free preprint is at http://pobox.com/~flash/Practical_Testing_of_C99.pdf).
http://en.wikipedia.org/wiki/Compiler_correctness#Testing (Largely written by me.)
Compiler testing bibliography (Please let me know of any updates I’ve missed.)
In the case of Brainfuck, I think the testing should be done with Brainfuck scripts. I would test the following, though:
1: Are all the cells initialized to 0?
2: What happens when you decrement the data pointer while it points at the first cell? Does it wrap? Does it point to invalid memory?
3: What happens when you increment the data pointer while it points at the last cell? Does it wrap? Does it point to invalid memory?
4: Does output function correctly?
5: Does input function correctly?
6: Does the [ ] construct work correctly?
7: What happens when you increment a byte more than 255 times? Does it wrap to 0 properly, or is it incorrectly treated as an integer or other larger value?
More tests are possible too, but this is probably where I'd start. I wrote a BF compiler a few years ago, and it had a few extra tests. In particular, I tested the [ ] handling heavily by putting a lot of code inside the block, since an early version of my code generator had issues there (on x86, a short jxx has only an 8-bit displacement, so blocks that produced more than about 128 bytes of code resulted in invalid x86 asm). A concrete wrap test for item 7 is sketched below.
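For instance, here is a hypothetical wrap test for item 7, written in the file layout the run-test script above expects (the file names are made up). 16 * 16 = 256 increments should wrap an 8-bit cell back to 0, so the program prints a single NUL byte:
printf '++++++++++++++++[>++++++++++++++++<-]>.' > wrap001.bf
: > wrap001.0                  # no input
printf '\0' > wrap001.1        # expected stdout: one NUL byte
: > wrap001.2                  # expected stderr: empty
An interpreter with wider, unmasked cells may behave differently here, which is exactly the kind of difference the test is meant to expose.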
You can test it with some already-written Brainfuck programs.
The secret is to:
Separate the concerns
Observe the law of Demeter
Inject your dependencies
Well, software that is hard to test is a sign that the developer wrote it like it's 1985. Sorry to say it, but by applying the three principles above, even line-numbered BASIC would be unit testable (it IS possible to inject dependencies into BASIC, because you can do "goto variable").