How to determine when AviSynth has an error message without seeing the video output - avisynth

Is there a programmatic way to test for errors in AviSynth scripts before seeing the black and red error message in the output?
We are currently assembling Avisynth script files as a part of an automated encoding routine. When something goes wrong with Avisynth or the source file, Avisynth renders a big black and red error message. Our encoder sees this as a normal video file and keeps on encoding without raising an error.
What is the best way to check for these errors without actually seeing the output from the video file?

AviSynth has support for try-catch: http://avisynth.org/mediawiki/Control_structures#The_try..catch_statement
I'm not sure how you would signal an error to your encoder from there. As far as I know, you must return a clip from the script, and a return statement inside a try/catch block does not return from the entire script anyway: http://avisynth.org/mediawiki/The_full_AviSynth_grammar#Closing_Remarks
You can, however, log error messages to text files, so I've seen people do this to test an AVS script for errors before running it:
script = "file_to_test.avs"
try {
    Import(script)
} catch (err) {
    WriteFileStart(BlankClip(), "C:\logfile.txt", script, """ ": " """, err, append=true)
}
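If the encoding routine is driven from a scripting language, the log-file trick above can be paired with a pre-flight check: run the probe script through any AviSynth host first, then inspect the log before encoding. A minimal Python sketch, assuming the log path and "script: error" line format from the AVS snippet above (script_errors is a hypothetical helper name):

```python
import os

def script_errors(logfile, script_path):
    """Return any lines the AVS try/catch probe logged for script_path.
    An empty list means the script imported cleanly (or no log exists)."""
    if not os.path.exists(logfile):
        return []
    with open(logfile) as f:
        return [line.strip() for line in f if script_path in line]
```

If this returns a non-empty list, skip the encode and surface the logged message instead of letting the encoder chew through the black and red error clip.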

Related

exception handling in executable

I'm writing a program to analyze Excel data and create an output file with the analyzed data. For reading the input files I use pandas, with a try/except block wrapped around the calls to catch IOErrors in case any of the input files is open in the background; otherwise the program crashes. My program has to be converted to an .exe so that other people can use it.
Now to the problem: if I start the program from the PyCharm IDE, the exception handling works fine. But if I start the program as an .exe, it always throws the exception, even if the input files are closed.
Below my code, just a normal exception handling.
try:
    dataset = pd.read_excel(path_cmd_data, engine='openpyxl')
    pattern = pd.read_excel(path_cmd_regex, sheet_name='RegexTCM', engine='openpyxl')
    production_dates = pd.read_excel(path_cmd_prod_dates, engine='openpyxl')
except IOError:
    self.label3.setText('Reading Input Data failed!')
    return
Thanks for your help, I'm a bit lost with this problem.
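A common diagnostic for this situation is to temporarily widen the handler and log the real exception: in a frozen build the failure is often not an IOError at all (for example, the bundler may have missed a dependency, which raises ImportError inside read_excel). A hedged sketch; read_with_logging is a hypothetical helper, and the broad except is for diagnosis only:

```python
import traceback

def read_with_logging(reader, path, log_path="read_error.txt"):
    """Call reader(path); on any failure, dump the full traceback to a
    file so you can see which exception the frozen .exe really raises.
    Narrow the except clause again once the cause is known."""
    try:
        return reader(path)
    except Exception:
        with open(log_path, "w") as f:
            f.write(traceback.format_exc())
        return None
```

Passing pd.read_excel as the reader and then opening read_error.txt next to the .exe shows the actual exception type and message.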

Prevent "t.expect(log).contains('string', 'message')" from printing log

I have a test that looks for ~100 separate substrings of a large log file, and fails if any of the strings is not present. Each time a not-present string is found, I emit a message saying which one. However, the log file is also put into the output log, and it is pretty large. How can I prevent it from being printed?
TestCafe does not allow removing an error message from a report. However, you can rewrite your assertion in the following way to hide the expected string:
const logContains = log.includes(myString);
await t.expect(logContains).ok(`The log file does not contain the following string: "..."`);

Printing variable in the server code

I have inherited some code that is written in Perl and makes HTTP requests between the server and the client. I want to print a few variables that are in the server code, but that raises errors when I use the print statement. The variables are scalars, arrays, or hashes. I want to print the output to the terminal, and only for debugging purposes. A few of the errors I get are:
malformed header from script 'get_config': Bad header: self=$VAR1 = bless( {
Response header name 'self=Bio' contains invalid characters, aborting request
A simple print 'test' raises an error like
malformed header from script 'get_config': Bad header: test
How do I print the variable values without any errors?
You haven't explained yourself very well at all. But, from the errors you're getting, I assume this is a CGI program.
A CGI program sends its output to STDOUT, where the web server catches it and processes it in various ways. In order for this to work, the data that your program prints to STDOUT needs to follow various rules. Probably the most important of those rules is that the first output from your program must be the CGI headers - and at the least, those headers should include a Content-type: header.
I assume that you're trying to display your debugging output before your program has sent the CGI headers. That's not going to work.
But do you really want to send your debugging output to STDOUT? That seems like a bad idea. If you use warn() instead of print() then your output will go to STDERR instead - and in most web servers, STDERR is connected to the web server's error log.
For more control over the output generated by warn(), see the CGI::Carp module.

how to automatically break the lua program at the error line

I use my own interpreter to run the lua program and debug with zerobrane. If the interpreter encounters an error, how to let the debugger break at the error line?
There is no mechanism in Lua that allows you to catch run-time errors from the debugger. There was a call to the debug.traceback function in Lua 5.1, but it's no longer made in Lua 5.2+. If you have your own error handling, you can call require("mobdebug").pause(), which will request that the ZeroBrane Studio debugger stop on the next executable Lua line; this will at least let you see the stack trace and the location of the error, but that is probably all you can do. You can also try to assign to debug.traceback a function that calls pause(), but, again, this will only work in Lua 5.1.
For example, try running the following script from the IDE:
require("mobdebug").start()
debug.traceback = function(...)
    print("traceback", ...)
    require("mobdebug").pause()
end
a()
print("done") -- it will never get here
If you save it into an on-error.lua file and run it, you should see execution stopped on line 5 (after the pause() call) with the following message:
traceback on-error.lua:6: attempt to call global 'a' (a nil value) 2

Why don't I get output from Debug.log in an infinite loop in the Elm REPL?

I'm debugging some code with an infinite loop, but it's difficult, because I can't get any log messages out. Here's a simplified case:
import Debug exposing (log)
f x =
    let _ = log "Hello, world!" ()
    in f x
If I run this at my Elm REPL like f (), it infinitely loops and never prints out "Hello, world!" as I expect it to.
I looked at the implementation of Debug.log (following it to Native.Debug.log), but it just seems to be calling process.stdout.write or console.log synchronously, so I'm surprised that I'm not seeing any output.
This is just a bug in the Elm REPL.
The Problem
I dove into the implementation of the Elm REPL. The relevant function is here: Eval.Code.run
This run function seems to be the function that executes a piece of code. It looks like each line of code is executed in a subprocess, via Elm.Utils.unwrappedRun. There are two problems with the way it runs it:
The stdout of the subprocess is not streamed; it's only returned once the whole subprocess is done. So as long as you're waiting for your code to finish evaluation, you won't see anything.
If you hit ctrl-c to end evaluation prematurely (which works fine, and returns you to the Elm prompt), the Elm REPL ignores the stdout that is returned to it. Notice the pattern match for CommandFailed:
Left (Utils.CommandFailed _out err) ->
    throwError err
The Utils.CommandFailed result helpfully includes the stdout (which is being bound to _out), but this code ignores it and just throws the error.
So basically there isn't anything weird going on with the Elm compiler or runtime; it's just that the REPL isn't as good as it could be with respect to logged results.
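The buffering problem in point 1 is a generic subprocess pitfall: APIs that return captured output hand it over only after the child exits, while reading the pipe yourself streams each line as it is flushed. A Python sketch of the streaming side (illustrative only; the REPL itself is written in Haskell):

```python
import subprocess
import sys

def run_streamed(cmd):
    """Forward a child process's stdout line by line as it is produced,
    rather than collecting it only after the process exits (the
    buffering behavior described in point 1 above)."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    lines = []
    for line in proc.stdout:      # yields each line as soon as it is flushed
        sys.stdout.write(line)    # forwarded immediately, even mid-run
        lines.append(line)
    proc.wait()
    return lines
```

With this approach, log output from an infinitely looping child would still appear in the parent's terminal as it is produced.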
The Workaround
As a workaround, in order to debug things like infinite loops, you can
put some test code in a new file, Scratch.elm, like x = f ()
compile the code with elm-make Scratch.elm --output scratch.js
run the code with node scratch.js
Then output will be streamed to your terminal.