Interpreting WebKit layout test results

I have the task of breaking down WebKit layout test results by passed/failed/skipped/crashed tests, and I'm trying to understand what kinds of "categories" to expect in the output.
I understand that the NEW tests and the FAILED tests are shown in results.txt, but what about the rest?
These are the categories I have discovered so far:
stderr
incorrect layout
succeeded
timed out
What else should I consider?

You can see the failure types in test_failures.py. They are:
Test crashed
Test timed out
Expected output (whether image, audio, or text) is missing
Expected text output did not match actual output
Expected image hash (checksum) output did not match actual output
Expected image data (actual pixels) output did not match actual output
Expected image and text output did not match actual output
Expected audio output did not match actual output
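To turn those failure types into the passed/failed/crashed/timed-out breakdown you asked about, a grouping along these lines could work. This is a Python sketch; the failure-type labels here are hypothetical stand-ins, not the exact class names in test_failures.py:

```python
# Minimal sketch: bucket per-test failure types into summary categories.
# The dictionary keys are illustrative labels, not WebKit's real strings.
FAILURE_TO_CATEGORY = {
    "crash": "crashed",
    "timeout": "timed out",
    "missing expected output": "failed",
    "text mismatch": "failed",
    "image hash mismatch": "failed",
    "image data mismatch": "failed",
    "image and text mismatch": "failed",
    "audio mismatch": "failed",
}

def summarize(results):
    """results: iterable of (test_name, failure_type or None) pairs."""
    counts = {"succeeded": 0, "crashed": 0, "timed out": 0, "failed": 0}
    for _name, failure in results:
        category = "succeeded" if failure is None else FAILURE_TO_CATEGORY[failure]
        counts[category] += 1
    return counts
```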


Asynchronous reading of stdout

I've written this simple script, which generates one output line per second (generator.sh):
for i in {0..5}; do echo $i; sleep 1; done
The Raku program will launch this script and print the lines as soon as they appear:
my $proc = Proc::Async.new("sh", "generator.sh");
$proc.stdout.tap({ .print });
my $promise = $proc.start;
await $promise;
All works as expected: every second we see a new line. But let's rewrite the generator in Raku (generator.raku):
for 0..5 { .say; sleep 1 }
and change the first line of the program to this:
my $proc = Proc::Async.new("raku", "generator.raku");
Now something is wrong: first we see the first line of output ("0"), then a long pause, and finally all the remaining lines appear at once.
I tried to capture the output of both generators via the script command:
script -c 'sh generator.sh' script-sh
script -c 'raku generator.raku' script-raku
and analyzed them in a hex editor. They look identical: after each digit, the bytes 0d 0a follow.
Why is there such a difference between seemingly identical generators? I need to understand this because I am going to launch an external program and process its output as it arrives.
Why is there such a difference between seemingly identical generators?
First, with regard to the title: the issue is not on the reading side, but rather on the writing side.
Raku's I/O implementation looks at whether STDOUT is attached to a TTY. If it is a TTY, any output is immediately written to the output handle. However, if it's not a TTY, then it will apply buffering, which results in a significant performance improvement but at the cost of the output being chunked by the buffer size.
If you change generator.raku to disable output buffering:
$*OUT.out-buffer = False; for 0..5 { .say; sleep 1 }
Then the output will be seen immediately.
I need to understand this because I am going to launch an external program and process its output as it arrives.
It'll only be an issue if the external program you launch also has such a buffering policy.
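The same TTY-sensitive buffering policy exists in most language runtimes. As a rough illustration in Python (a sketch, not Raku-specific): the child's stdout is a pipe rather than a TTY, so we launch it unbuffered, which plays the role of Raku's out-buffer = False, and consume lines as soon as they appear.

```python
import subprocess
import sys

# Child process prints one number per tick, without explicit flushing.
child_code = "import time\nfor i in range(3):\n    print(i)\n    time.sleep(0.1)\n"

proc = subprocess.Popen(
    [sys.executable, "-u", "-c", child_code],  # -u: unbuffered stdout
    stdout=subprocess.PIPE,
    text=True,
)
# Because the child is unbuffered, each line is readable as soon as
# it is written, instead of arriving in one chunk when the child exits.
lines = [line.strip() for line in proc.stdout]
proc.wait()
```

Without the -u flag, the child would detect the pipe, switch to block buffering, and the reader would see all lines at once, exactly the behavior described in the question.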
In addition to Jonathan Worthington's answer: although buffering is an issue on the writing side, it is possible to cope with it on the reading side. stdbuf, unbuffer, or script can be used on Linux (see this discussion). On Windows, only winpty helped me, which I found here.
So, if the winpty.exe, winpty-agent.exe, winpty.dll, and msys-2.0.dll files are in the working directory, this code can be used to run a program without buffering:
my $proc = Proc::Async.new(<winpty.exe -Xallow-non-tty -Xplain raku generator.raku>);

How to format xmllint XPath result output?

I get an XML document from somewhere, am only interested in a subset of the information it contains, and want the output to be readable. However, I cannot get xmllint to format the result of an XPath evaluation when more than one element comes out of it. See this minimal example:
<currentMeasurements>
<datetime>2022-08-18 00:00:00</datetime>
<sensor>
<type>temperature</type>
<name>Behind the garage</name>
<value>10.5</value>
</sensor>
<sensor>
<type>noise</type>
<name>In the classroom</name>
<value>POSITIVE_INFINITY</value>
</sensor>
<sensor>
<type>temperature</type>
<name>In the garage</name>
<value>11.0</value>
</sensor>
</currentMeasurements>
Of course, when I fetch that data from some server, it arrives as a single line:
<currentMeasurements><datetime>2022-08-18 00:00:00</datetime><sensor><type>temperature</type><name>Behind the garage</name><value>10.5</value></sensor><sensor><type>noise</type><name>In the classroom</name><value>POSITIVE_INFINITY</value></sensor><sensor><type>temperature</type><name>In the garage</name><value>11.0</value></sensor></currentMeasurements>
Pretty-printing it on the command line is easy (the following commands assume that the long line has been copied to the clipboard and is accessible as /dev/clipboard):
cat /dev/clipboard | xmllint --format -
That gives me a formatted string (like the minimal example above). But I only want a subset of the data, which I can select with an XPath expression. For example, if I am not interested in noise but only in temperatures, I can do this:
cat /dev/clipboard | xmllint --xpath "//type[text() = 'temperature']/.." -
This works; however, it doesn't format the output, which makes the result unreadable (especially once the data gets less minimal, of course):
<sensor><type>temperature</type><name>Behind the garage</name><value>10.5</value></sensor>
<sensor><type>temperature</type><name>In the garage</name><value>11.0</value></sensor>
Even when providing --format as well, the output is not formatted. And when I pipe the result into another xmllint --format -, it complains about "Extra content at the end of the document", since, of course, the XPath result does not have a single root.
So my question could be phrased in any of these ways:
How can I format the result of an XPath evaluation with xmllint?
How can I format XML input with "more than one root" with xmllint?
How can I wrap a root node around an XPath evaluation result?
My only solution so far is to wrap the xmllint call in a subshell and add printf statements around it, but I suspect it can be done more elegantly:
cat /dev/clipboard | (printf "<myNewRoot>"; xmllint --format --xpath "//type[text() = 'temperature']/.." - ; printf "</myNewRoot>") | xmllint --format -
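If stepping outside xmllint is acceptable, the select-wrap-reformat step can also be done with Python's standard library. This is a sketch using a shortened version of the sample document; note that ElementTree understands only a subset of XPath, and myNewRoot is an arbitrary name, just as in the subshell version:

```python
import xml.etree.ElementTree as ET

# Single-line input, as it would arrive from the server (shortened sample).
raw = ("<currentMeasurements>"
       "<sensor><type>temperature</type><name>Behind the garage</name>"
       "<value>10.5</value></sensor>"
       "<sensor><type>noise</type><name>In the classroom</name>"
       "<value>POSITIVE_INFINITY</value></sensor>"
       "</currentMeasurements>")

root = ET.fromstring(raw)
# Reparent every matching <sensor> under a new single root, so the
# result is a well-formed document that can be pretty-printed.
wrapper = ET.Element("myNewRoot")
for sensor in root.findall(".//sensor[type='temperature']"):
    wrapper.append(sensor)
ET.indent(wrapper)  # in-place pretty-printer, Python 3.9+
pretty = ET.tostring(wrapper, encoding="unicode")
print(pretty)
```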

ffmpeg - How to change the filter parameters depending on time or frame number?

Hello to all users and helpers here in this forum! Thank you, I am new and have already found a lot of solutions here.
Now I want to ask if someone can help me:
I have a movie of about 30 seconds made from one image.
Now I want to pixelate it depending on time or frame number, a little bit less each time.
My code so far:
ffmpeg -i in.mp4 -vf scale=iw/n:ih/n,scale=n*iw:n*ih:flags=neighbor out.mp4
where n should be the frame number, 1 to 900. This could also be t+1 for a slower change.
Error message:
undefined constant or missing '(' in 'n'
error when evaluating the expression 'ih/n'
maybe the expression for out_w:'w/n' or for out_h:'ih/n' is self-referencing.
failed to configure output pad on Parsed_scale_0
error reinitializing filters!
failed to inject frame into filter network: invalid argument
error while processing the decoded data for stream #0:0
Do you have some suggestions please? Thank you in advance.

Updating values in JSON file A using file B as a reference - the return

OK, I should feel ashamed of this, but I'm unable to understand how awk works...
A few days ago I posted this question, which asked how to replace fields in file A using file B as a reference (both files have matching IDs).
But after accepting the answer as correct (thanks, Ed!), I'm now struggling with how to do it using the following pattern:
File A
{"test_ref":32132112321,"test_id":12345,"test_name":"","test_comm":"test", "null_test": "true"}
{"test_ref":32133321321,"test_id":12346,"test_name":"","test_comm":"test", "test_type": "alfa"}
{"test_ref":32132331321,"test_id":12347,"test_name":"","test_comm":"test", "test_val": 1923}
File B
{"test_id": 12345, "test_name": "Test values for null"}
{"test_id": 12346, "test_name": "alfa tests initiated"}
{"test_id": 12347, "test_name": "discard values"}
Expected result:
{"test_ref":32132112321,"test_id":12345,"test_name":"Test values for null","test_comm":"test", "null_test": "true"}
{"test_ref":32133321321,"test_id":12346,"test_name":"alfa tests initiated","test_comm":"test", "test_type": "alfa"}
{"test_ref":32132331321,"test_id":12347,"test_name":"discard values","test_comm":"test", "test_val": 1923}
I tried some variations on the original solution but without success. So, based on the question posted before, how could I achieve the same results with this new pattern?
PS: One important note: the lines in file A do not always have the same length.
Big thanks in advance.
EDIT:
After trying the solution posted by Wintermute, it seems it doesn't work with lines such as:
{"test_ref":32132112321,"test_id":12345,"test_name":"","test_comm":"test", "null_test": "true","modifiers":[{"type":3,"value":31}{"type":4,"value":33}]}
Error received:
error: parse error: Expected separator between values at line xxx, column xxx
Parsing JSON with awk or sed is not a good idea for the same reasons that it's not a good idea to parse XML with them: sed works based on lines, and JSON is not line-based. awk works on vaguely tabular data, and JSON is not vaguely tabular. People don't expect their JSON tools to break when they insert newlines in benign places.
Instead, consider using a tool geared towards JSON processing, such as jq. In this particular case, you could use
jq -c -s 'group_by(.test_id) | map(.[0] + .[1]) | .[]' a.json b.json > c.json
Here jq slurps (-s) the input files into an array of JSON objects, groups these by test_id, merges them and unpacks the array. -c means compact output format, so each JSON object in the result ends up on a single line in the output.
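For comparison, here is the same merge as a plain-Python sketch. It assumes, as in the sample data, one JSON object per line and unique test_ids; the literal lists stand in for reading the two files:

```python
import json

# Stand-ins for the contents of file A and file B (one object per line).
file_a_lines = [
    '{"test_ref":32132112321,"test_id":12345,"test_name":"","test_comm":"test"}',
    '{"test_ref":32133321321,"test_id":12346,"test_name":"","test_comm":"test"}',
]
file_b_lines = [
    '{"test_id": 12345, "test_name": "Test values for null"}',
    '{"test_id": 12346, "test_name": "alfa tests initiated"}',
]

# Index file B's records by test_id, then overlay each onto the matching
# record from file A, mirroring jq's group_by(.test_id) | map(.[0] + .[1]):
# later fields win, so B's test_name replaces A's empty one.
by_id = {rec["test_id"]: rec for rec in map(json.loads, file_b_lines)}
merged = []
for line in file_a_lines:
    rec = json.loads(line)
    rec.update(by_id.get(rec["test_id"], {}))
    merged.append(rec)
```

A real JSON parser makes the extra fields and varying line lengths a non-issue, which is the point of the jq answer above.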

AMPL: How to print variable output using the NEOS Server, when you can't include data and model commands in the command file?

I'm doing some optimization using a model whose number of constraints and variables exceeds the cap of the student version of AMPL, so I've found a webpage [http://www.neos-server.org/neos/solvers/milp:Gurobi/AMPL.html] which can solve my type of model.
I've found, however, that when using a solver where you can provide a command file (which I assume is the same as a .run file), the NEOS Server documentation tells you to see the documentation of the input file. I'm using AMPL input, which according to [http://www.neos-guide.org/content/FAQ#ampl_variables] should be able to print the decision variables using a command file of the form:
solve;
display _varname, _var;
The problem is that NEOS claims that you cannot add the
data datafile;
model modelfile;
commands to the .run file, with the result that the compiler cannot find the variables.
Does anyone know of a way to work around this?
Thanks in advance!
EDIT: If anyone else has this problem (which, based on my Internet search, I believe many people do): try removing any reset; command from the .run file!
You don't need to specify model or data commands in the script file submitted to NEOS. It loads the model and data files automatically, solves the problem, and then executes the script (command file) you provide. For example, submitting the diet1.mod model, the diet1.dat data, and this trivial command file
display _varname, _var;
produces output that includes:
: _varname _var :=
1 "Buy['Quarter Pounder w/ Cheese']" 0
2 "Buy['McLean Deluxe w/ Cheese']" 0
3 "Buy['Big Mac']" 0
4 "Buy['Filet-O-Fish']" 0
5 "Buy['McGrilled Chicken']" 0
6 "Buy['Fries, small']" 0
7 "Buy['Sausage McMuffin']" 0
8 "Buy['1% Lowfat Milk']" 0
9 "Buy['Orange Juice']" 0
;
As you can see, this is the output of the display command.