Logical NOT operation in JVM

I'm trying to mimic the behavior of a NOT gate using Jasmin. The behavior is as follows:
Pop an integer off the stack
if the integer is 0, push 1 back onto the stack
else push 0 back onto the stack
I've tried two different attempts at this but to no avail.
Attempt 1:
...(other code1)
ifeq 3 ; if the top of stack is 0, jump 3 lines down to "iconst_1"
iconst_0 ; top of stack was not 0, so we push 0
goto 2 ; jump 2 lines down to the first line of (other code2)
iconst_1
...(other code2)
Of course, the above example doesn't work because ifeq <offset> takes in a Label rather than a hard-coded integer as its offset. Is there a similar operation to ifeq that does accept integers as a parameter?
Attempt 2:
...
ifeq Zero ; top of stack is 0, so jump to Zero
iconst_0 ; top of stack was not 0, so we push 0
...
... (some code in between)
...
ifeq Zero ; top of stack is 0, so jump to Zero
iconst_0 ; top of stack was not 0, so we push 0
...
Zero:
iconst_1 ; top of stack was 0, so push 1 to the stack
goto <???> ; How do I know which "ifeq Zero" called this label?
The problem with this is that I've more than one place in my code making use of the NOT operation. I tried using ifeq with labels, but after I'm done how do I know which line to return to using goto? Is there a way to dynamically determine which "ifeq Zero" made the jump?
Any insight would be greatly appreciated.

Is there a similar operation to ifeq that does accept integers as a parameter?
Yes, you can specify relative offsets using the $ sign.
BUT the relative offsets are counted in bytes, not in lines.
ifeq $+7 ; 0: jump +7 bytes forward from this instruction
iconst_0 ; +3
goto $+4 ; +4
iconst_1 ; +7
# ... ; +8
Is there a way to dynamically determine which "ifeq Zero" made the jump?
No. Use multiple different labels instead of single Zero.
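For example (label names are just illustrative; each use of NOT gets its own fresh pair):
ifeq Zero1 ; top of stack is 0, so jump and push 1
iconst_0 ; top of stack was nonzero, so push 0
goto Done1
Zero1:
iconst_1 ; top of stack was 0, so push 1
Done1:
... ; the next NOT in your code would use Zero2/Done2, and so on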
Well, there is actually a pair of bytecodes (jsr/ret) that support dynamic return address. But these bytecodes are deprecated and are not supported in Java 6+ class files.
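For historical interest only, a rough sketch of how that pair was used (the subroutine name is illustrative):
jsr NotSub ; push the return address and jump to the subroutine
... ; execution resumes here after ret
NotSub:
astore 1 ; stash the return address in local variable 1
... ; the shared NOT logic would go here
ret 1 ; jump back to whichever jsr called the subroutine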

Why does fcntl start=0, len=0, whence=2 work?

According to the fcntl manual, fcntl locking with start=0, len=0, whence=2 should lock the byte range starting from the end of file (whence=2), with offset 0 (start=0) until the end of file (len=0), which in my mind means locking in total 0 bytes from EOF to EOF.
In this case I would expect that locking with those arguments would not lock anything. However, if I try it (using the Python fcntl wrapper), the following code does lock something, and a second copy waits for the first to finish:
f = open('some_file', 'a+')
fcntl.lockf(f, fcntl.LOCK_EX, 0, 0, 2)
print('got the lock')
time.sleep(100)
Similarly, the code with parameters whence=2, offset=100, len=0 also works, even though in this case the byte range is backwards [EOF + 100, EOF].
What am I locking?
I did some tests, and the answer seems to be as follows: whence=2 with len=0 does not lock up to the current EOF, but all the way to infinity, which is not how I would read the description in the man page:
Specifying 0 for l_len has the special meaning: lock all bytes starting at the location specified by l_whence and l_start through to the end of file, no matter how large the file grows.
😞
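For what it's worth, here is a small two-terminal sketch (not part of the original test; the filename and offset are arbitrary) that makes the "to infinity" behaviour visible: the prober blocks even though it asks for a byte far beyond the current EOF.
import fcntl, sys, time

f = open('some_file', 'a+')
if sys.argv[1:] == ['holder']:
    # len=0, start=0, whence=2: lock from EOF onwards ("to infinity")
    fcntl.lockf(f, fcntl.LOCK_EX, 0, 0, 2)
    print('holder: got the lock')
    time.sleep(100)
else:
    # try to lock one byte at absolute offset 10 MB, far beyond EOF;
    # this blocks while the holder runs, so the held range is not just [EOF, EOF]
    fcntl.lockf(f, fcntl.LOCK_EX, 1, 10_000_000, 0)
    print('prober: got the lock')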

Reset behavior with miter equivalence checking

I'm trying to prove equivalence using miter and sat for a sequential circuit. Essentially, the behavior of the two circuits should be identical as soon as they are reset. I cannot figure out how to tell Yosys this, though. I have tried resetting the designs with -set in_reset 0 -set-at 0 in_reset 1. Here is an example circuit (a shift register) and Yosys script that illustrate what I'm trying to do:
module shift_reg(
    input  clock,
    input  reset,
    input  in,
    output out
);
    reg [7:0] r;
    assign out = r[7];
    always @(posedge clock) begin
        if (reset)
            r <= 0;
        else
            r <= {r[6:0], in};
    end
endmodule
My Yosys commands:
read_verilog shift_reg.v
rename shift_reg shift_reg_2
read_verilog shift_reg.v
prep; proc; opt; memory
miter -equiv -flatten shift_reg shift_reg_2 miter
hierarchy -top miter
sat -verify -tempinduct -prove trigger 0 -set in_reset 0 -set-at 0 in_reset 1 -seq 0 miter
If I add -set-init-zero it works, but that defeats the purpose because I'm trying to test reset behavior. I can also change -seq 0 to -seq 8 but that also defeats the purpose because I'm trying to check that the circuits are equivalent as soon as they are reset.
How do I tell equivalence checking to reset the circuits before checking?
Simply use the following SAT command:
sat -verify -tempinduct -prove trigger 0 \
-set in_reset 0 -set-at 1 in_reset 1 -seq 1 miter
Changes compared to your script:
The "sat" command numbers steps starting with 1, so the -set-at option needs 1 as its first argument to reset in the first cycle.
The "-seq 1" disables checking of properties in the first cycle. This makes sense because the reset will only have taken effect in the 2nd cycle, so the two modules might indeed produce different outputs in cycle 1.
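Putting it together, the full script would then read (everything except the sat command is unchanged from the question):
read_verilog shift_reg.v
rename shift_reg shift_reg_2
read_verilog shift_reg.v
prep; proc; opt; memory
miter -equiv -flatten shift_reg shift_reg_2 miter
hierarchy -top miter
sat -verify -tempinduct -prove trigger 0 -set in_reset 0 -set-at 1 in_reset 1 -seq 1 miter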

Ghostscript for PS integrity test: terminate at EOF, return error unless stack is empty

To test the integrity of PostScript files, I'd like to run Ghostscript in the following way:
Return 1 (or other error code) on error
Return 0 (success) at EOF if stack is empty
Return 1 (or other error code) otherwise
I could run gs in the background, and use a timeout to force termination if gs hangs with items left on the stack. Is there an easier solution?
Ghostscript won't hang if you send files as input (unless you write a program which enters an infinite loop or otherwise fails to reach a halting state). Having items on any of the stacks won't cause it to hang.
On the other hand, it won't give you an error if a PostScript program leaves operands on the operand stack (or dictionaries on the dictionary stack, clips on the clip stack or gstates on the graphics state stack). This is because that's not an error, and since PostScript interpreters normally run in a job server loop it's not a problem either. Terminating the job returns control to the job server loop, which does a save and restore around the total job, thereby clearing up anything left behind.
I'd suggest that if you really want to do this you need to adopt the same approach: write a PostScript program which executes the PostScript program you want to 'test', then checks the operand stack (and the other stacks, if required) to see if anything has been left behind. Note that you will want to execute the test program in a stopped context, as an error in the course of the program could clearly leave stuff lying around.
Ghostscript returns 0 on a clean exit and a value less than 0 for errors, if I remember correctly. You would need to use signalerror in your test framework in order to raise an error if items are left at the end of a program.
[EDIT]
Anything supplied to Ghostscript on the command line by either -s or -d is defined in systemdict, so if we do -sInputFileName=/test.pdf then we will find in systemdict a key /InputFileName whose value is a string with the contents (/test.pdf). We can use that to pass the filename to our program.
The stopped operator takes an executable array as an argument, and returns either true or false depending on whether an error occurred while executing the array (3rd Edition PLRM, p 697).
So we need to run the program contained in the filename we've been given, and do it in a 'stopped' context. Something like this:
{InputFileName run} stopped
{
(Error occurred\n) print flush
%% Potentially check $error for more information.
}{
(program terminated normally\n) print flush
%% Here you could check the various stacks
} ifelse
The following, based 90% on KenS's answer, is 99% satisfactory:
Program checkIntegrity.ps:
{Script run} stopped
{
(\n===> Integrity test failed: ) print Script print ( has error\n\n) print
handleerror
(ignore this error which only serves to force a return value of 1) /syntaxerror signalerror
}{
% script passed, now check the stack
count dup 0 eq {
pop (\n===> Integrity test passed: ) print Script print ( terminated normally\n\n) print
} {
(\n===> Integrity test failed: ) print Script print ( left ) print
3 string cvs print ( item(s) on stack\n\n) print
Script /syntaxerror signalerror
} ifelse
} ifelse
quit
Execute with
gs -q -sScript=CodeToBeChecked.ps checkIntegrity.ps ; echo $?
For the last 1% of satisfaction I would need a replacement for
(blabla) /syntaxerror signalerror
It forces an exit with return code 1, but it is very verbose and distracts from the actual error in the checked script that is reported by handleerror. Therefore a cleaner way to exit(1) would be welcome.

while [[ condition ]] stalls on loop exit

I have a problem with ksh in that a while loop is failing to obey the "while" condition. I should add now that this is ksh88 on my client's Solaris box. (That's a separate problem that can't be addressed in this forum. ;) I have seen Lance's question and some similar but none that I have found seem to address this. (Disclaimer: NO I haven't looked at every ksh question in this forum)
Here's a very cut down piece of code that replicates the problem:
#!/usr/bin/ksh
#
go=1
set -x
tail -0f loop-test.txt | while [[ $go -eq 1 ]]
do
    read lbuff
    set $lbuff
    nwords=$#
    printf "Line has %d words <%s>\n" $nwords "${lbuff}"
    if [[ "${lbuff}" = "0" ]]
    then
        printf "Line consists of %s; time to absquatulate\n" $lbuff
        go=0 # Violate the WHILE condition to get out of loop
    fi
done
printf "\nLooks like I've fallen out of the loop\n"
exit 0
The way I test this is:
Run loop-test.sh in background mode
In a different window I run commands like "echo some nonsense >>loop-test.txt" (w/o the quotes, of course)
When I wish to exit, I type "echo 0 >>loop-test.txt"
What happens? It indeed sets go=0 and displays the line:
Line consists of 0; time to absquatulate
but does not exit the loop. To break out I append one more line to the txt file. The loop does NOT process that line and just falls out of the loop, issuing that "fallen out" message before exiting.
What's going on with this? I don't want to use "break" because in the actual script, the loop is monitoring the log of a database engine and the flag is set when it sees messages that the engine is shutting down. The actual script must still process those final lines before exiting.
Open to ideas, anyone?
Thanks much!
-- J.
OK, that flopped pretty quick. After reading a few other posts, I found an answer given by dogbane that sidesteps my entire pipe-to-while scheme. His is the second answer to a question (from 2013) where I see neeraj is using the same scheme I'm using.
What was wrong? The pipe-to-while has always worked for input that will end, like a file or a command with a distinct end to its output. However, from a tail command, there is no distinct EOF. Hence, the while-in-a-subshell doesn't know when to terminate.
Dogbane's solution: Don't use a pipe. Applying his logic to my situation, the basic loop is:
while read line
do
# put loop body here
done < <(tail -0f ${logfile})
No subshell, no problem.
Caveat about that syntax: There must be a space between the two < operators; otherwise it looks like a here-document with bad syntax.
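That is, side by side (same tail command as above):
done < <(tail -0f ${logfile})  # OK: input redirected from a process substitution
done <<(tail -0f ${logfile})   # without the space, the shell sees << (a here-document) and chokes on the (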
Er, one more catch: The syntax did not work in ksh, not even in the mksh (under cygwin) which emulates ksh93. But it did work in bash. So my boss is gonna have a good laugh at me, 'cause he knows I dislike bash.
So thanks MUCH, dogbane.
-- J
After articulating the problem and sleeping on it, the reason for the described behavior came to me: after setting go=0, the while condition is not re-tested until the loop body finishes, and the body is sitting in read, waiting for another line of data to come in from STDIN via that pipe.
And now that I have realized the cause of the weirdness, I can speculate on an alternative way of reading from the stream. For the moment I am thinking of the following solution:
Open the input file as STDIN (Need to research the exec syntax for that)
When the condition occurs, close STDIN (Again, need to research the syntax for that)
It should then be safe to use the more intuitive "while read lbuff" at the top of the loop.
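Roughly what I have in mind (completely untested; the exact exec redirection syntax is the part I still need to verify):
exec 0< loop-test.txt        # step 1: make the log file our STDIN
while read lbuff
do
    # ... process "$lbuff" as before ...
    if [[ "${lbuff}" = "0" ]]
    then
        exec 0<&-            # step 2: close STDIN; the next read fails and the loop ends
    fi
done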
I'll test this out today and post the result. I hope someone else can benefit from the method (if it works).

Why isn't g++ tail call optimizing while gcc is?

I wanted to check whether g++ supports tail calling so I wrote this simple program to check it: http://ideone.com/hnXHv
#include <cstddef>
#include <iostream>
#include <string>

using namespace std;

size_t st;

void PrintStackTop(const std::string &type)
{
    int stack_top;
    if(st == 0) st = (size_t) &stack_top;
    cout << "In " << type << " call version, the stack top is: " << (st - (size_t) &stack_top) << endl;
}

int TailCallFactorial(int n, int a = 1)
{
    PrintStackTop("tail");
    if(n < 2)
        return a;
    return TailCallFactorial(n - 1, n * a);
}

int NormalCallFactorial(int n)
{
    PrintStackTop("normal");
    if(n < 2)
        return 1;
    return NormalCallFactorial(n - 1) * n;
}

int main(int argc, char *argv[])
{
    st = 0;
    cout << TailCallFactorial(5) << endl;
    st = 0;
    cout << NormalCallFactorial(5) << endl;
    return 0;
}
When I compiled it normally it seems g++ doesn't really notice any difference between the two versions:
> g++ main.cpp -o TailCall
> ./TailCall
In tail call version, the stack top is: 0
In tail call version, the stack top is: 48
In tail call version, the stack top is: 96
In tail call version, the stack top is: 144
In tail call version, the stack top is: 192
120
In normal call version, the stack top is: 0
In normal call version, the stack top is: 48
In normal call version, the stack top is: 96
In normal call version, the stack top is: 144
In normal call version, the stack top is: 192
120
The stack difference is 48 in both of them, while the tail call version needs one more int. (Why?)
So I thought optimization might be handy:
> g++ -O2 main.cpp -o TailCall
> ./TailCall
In tail call version, the stack top is: 0
In tail call version, the stack top is: 80
In tail call version, the stack top is: 160
In tail call version, the stack top is: 240
In tail call version, the stack top is: 320
120
In normal call version, the stack top is: 0
In normal call version, the stack top is: 64
In normal call version, the stack top is: 128
In normal call version, the stack top is: 192
In normal call version, the stack top is: 256
120
The stack size increased in both cases, and while the compiler might think my CPU is slower than my memory (which it's not, anyway), I don't know why 80 bytes are necessary for a simple function. (Why is it?)
The tail call version also takes more space than the normal version, which would be completely logical if an int were 16 bytes. (No, I don't own a 128-bit CPU.)
Now thinking what reason the compiler has not to tail call, I thought it might be exceptions, because they depend on the stack tightly. So I tried without exceptions:
> g++ -O2 -fno-exceptions main.cpp -o TailCall
> ./TailCall
In tail call version, the stack top is: 0
In tail call version, the stack top is: 64
In tail call version, the stack top is: 128
In tail call version, the stack top is: 192
In tail call version, the stack top is: 256
120
In normal call version, the stack top is: 0
In normal call version, the stack top is: 48
In normal call version, the stack top is: 96
In normal call version, the stack top is: 144
In normal call version, the stack top is: 192
120
Which cut the normal version back to the non-optimized stack size, while the tail call version is 16 bytes over it. Still, an int is not 16 bytes.
I thought there was something I had missed in C++ that needs the stack arranged this way, so I tried C: http://ideone.com/tJPpc
Still no tail calling, but the stack is much smaller (32 bytes per frame in both versions).
Then I tried with optimization:
> gcc -O2 main.c -o TailCall
> ./TailCall
In tail call version, the stack top is: 0
In tail call version, the stack top is: 0
In tail call version, the stack top is: 0
In tail call version, the stack top is: 0
In tail call version, the stack top is: 0
120
In normal call version, the stack top is: 0
In normal call version, the stack top is: 0
In normal call version, the stack top is: 0
In normal call version, the stack top is: 0
In normal call version, the stack top is: 0
120
Not only did it tail call optimize the first, it also tail call optimized the second!
Why doesn't g++ do tail call optimization while it's clearly available on the platform? Is there any way to force it?
Because you're passing a temporary std::string object to the PrintStackTop function. This object is allocated on the stack and thus prevents the tail call optimization.
I modified your code:
void PrintStackTopStr(char const *const type)
{
    int stack_top;
    if(st == 0) st = (size_t) &stack_top;
    cout << "In " << type << " call version, the stack top is: " << (st - (size_t) &stack_top) << endl;
}

int RealTailCallFactorial(int n, int a = 1)
{
    PrintStackTopStr("tail");
    if(n < 2)
        return a;
    return RealTailCallFactorial(n - 1, n * a);
}
Compile with: g++ -O2 -fno-exceptions -o tailcall tailcall.cpp
And it now uses the tail call optimisation. You can see it in action if you use the -S flag to produce the assembly:
L39:
    imull %ebx, %esi
    subl $1, %ebx
L38:
    movl $LC2, (%esp)
    call __Z16PrintStackTopStrPKc
    cmpl $1, %ebx
    jg L39
You can see that the recursive call has been turned into a loop (jg L39).
I don't find the other answer satisfying because a local object has no effect on the stack once it's gone.
Here is a good article which mentions that the lifetime of local objects extends into the tail-called function. Tail call optimization requires destroying locals before relinquishing control, so GCC will not apply it unless it is sure that no local object will be accessed by the tail call.
Lifetime analysis is hard though, and it looks like it's being done too conservatively. Setting a global pointer to reference a local disables TCO even if the local's lifetime (scope) ends before the tail call.
{
    int x;
    static int *p;
    p = &x;
} // x is dead here, but the enclosing function still has TCO disabled.
This still doesn't seem to model what's happening, so I found another bug: passing a local to a parameter with a user-defined or otherwise non-trivial destructor also disables TCO. (Defining the destructor as = delete allows TCO.)
std::string has a nontrivial destructor, so that's causing the issue here.
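For illustration, a minimal case along those lines (the Tracer type and names are made up for this sketch; the observed behavior will depend on the GCC version and target ABI):
#include <iostream>

struct Tracer {
    ~Tracer() {} // user-provided, hence non-trivial, destructor
};

int CountDown(int n, Tracer t)
{
    if (n == 0)
        return 0;
    // The by-value Tracer argument has to be destroyed after the call returns,
    // so GCC typically cannot turn this self-call into a jump.
    return CountDown(n - 1, t);
}

int main()
{
    std::cout << CountDown(5, Tracer()) << std::endl;
    return 0;
}
Compiling with g++ -O2 -S and checking whether a call to CountDown survives in the recursion shows whether the optimization was applied; making the destructor trivial (e.g. removing the user-provided one) should make the recursive call disappear from the assembly.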
The workaround is to do these things in a nested function call, because lifetime analysis will then be able to tell that the object is dead by the tail call. But there's no need to forgo all C++ objects.
The original code with the temporary std::string object is still tail recursive, since the destructor for that temporary runs immediately after the call to PrintStackTop("tail") returns, so nothing should need to execute after the recursive return statement.
However, there are two issues that get in the way of tail call optimization (TCO):
the argument is passed by reference to the PrintStackTop function
the non-trivial destructor of std::string
It can be verified with a custom class that each of these two issues is enough to break TCO on its own.
As noted in the previous answer by @Potatoswatter, there is a workaround for both of these issues. It is enough to wrap the call to PrintStackTop in another function to help the compiler perform TCO even with the temporary std::string:
void PrintStackTopTail()
{
    PrintStackTop("tail");
}

int TailCallFactorial(int n, int a = 1)
{
    PrintStackTopTail();
    //...
}
Note that it is not enough to limit the scope by enclosing { PrintStackTop("tail"); } in curly braces; it must be wrapped in a separate function.
Now it can be verified with g++ version 4.7.2 (compilation options -O2) that tail recursion is replaced by a loop.
A similar issue is observed in Pass-by-reference hinders gcc from tail call elimination.
Note that printing (st - (size_t) &stack_top) is not enough to be sure that TCO is performed. For example, with the optimization option -O3 the function TailCallFactorial is inlined into itself five times, so TailCallFactorial(5) is executed as a single function call; the issue only shows up for larger argument values (for example TailCallFactorial(15);). So TCO is best verified by reviewing the assembly output generated with the -S flag.