What does it mean to 'trace' something in Java? - definition

My Java textbook is exploring for loops and talks about 'tracing' for loops, but does not give a clear definition. Does it simply mean breaking down the code?

Tracing is just monitoring the value of some variable while the loop is being executed.
For example, if you have a loop that reduces the value of x by 1, starting from an initial value of 5, then tracing means following x through the values 5, 4, 3, 2, 1 :)
Cheers!
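If a concrete picture helps, here is a small sketch (in Python rather than Java, purely for illustration) that records the value of x on each pass through the loop; the resulting list is exactly what a written-out trace of the loop would contain:

```python
# "Tracing" a loop by hand: record the value of x at every iteration.
def trace_countdown(start):
    trace = []           # the values x takes on, in order
    x = start
    while x >= 1:
        trace.append(x)  # observe x on every pass through the loop
        x -= 1
    return trace

print(trace_countdown(5))  # [5, 4, 3, 2, 1]
```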


How to build a decompiler for Lua 5.1?

I am building a decompiler for Lua 5.1. (for study purposes only)
Here is the generated code:
main <test.lua:0,0> (12 instructions, 48 bytes at 008D0520)
0+ params, 2 slots, 0 upvalues, 0 locals, 6 constants, 0 functions
1 [1] LOADK 0 -2 ; 2
2 [1] SETGLOBAL 0 -1 ; plz_help_me
3 [2] LOADK 0 -4 ; 24
4 [2] SETGLOBAL 0 -3 ; oh_no
5 [3] GETGLOBAL 0 -1 ; plz_help_me
6 [3] GETGLOBAL 1 -3 ; oh_no
7 [3] ADD 0 0 1
8 [3] SETGLOBAL 0 -5 ; plz_work
9 [4] GETGLOBAL 0 -6 ; print
10 [4] GETGLOBAL 1 -5 ; plz_work
11 [4] CALL 0 2 1
12 [4] RETURN 0 1
constants (6) for 008D0520:
1 "plz_help_me"
2 2
3 "oh_no"
4 24
5 "plz_work"
6 "print"
locals (0) for 008D0520:
upvalues (0) for 008D0520:
Original Code:
plz_help_me = 2
oh_no = 24
plz_work = plz_help_me + oh_no
print(plz_work)
How do I build a decompiler efficiently to generate this code? Should I use ASTs to model the behaviour of the code (the opcodes, in this case)?
The Lua VM is a register machine with a nearly unlimited supply of registers, which means you don't have to deal with the consequences of register allocation. That makes the whole thing much more bearable than decompiling, say, x86.
A very convenient intermediate representation for going up the abstraction level is SSA. A trivial transform treating registers as local variables and leaving memory loads as they are, followed by an SSA transform [1], will give you code suitable for further analysis. The next step is loop detection (done purely at the CFG level) and, helped by SSA, detection of the loop variables and loop invariants. Once that is done, you'll see that only a handful of common patterns exist, and they can be translated directly to higher-level loops. Detecting if and other linear control-flow sequences is even easier once you're in SSA form.
One nice property of SSA is that you can easily construct high-level AST expressions from it. You have a use count for every SSA variable, so you can simply substitute every single-use variable (as long as it was not produced by a side-effecting instruction) in place of its use (and the side-effecting ones too, if you maintain their order). Only multi-use variables will remain.
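To make that substitution step concrete, here is a hypothetical sketch (in Python; the representation of definitions as expression strings and the use-count table are invented for illustration) of folding single-use SSA variables into their use sites, so that only multi-use variables survive as named values:

```python
# Inline every SSA temporary that is used exactly once into the
# expression that uses it; multi-use variables are kept as-is.
def inline_single_use(defs, uses):
    """defs: var -> space-separated expression; uses: var -> use count."""
    def expand(expr):
        out = []
        for tok in expr.split():
            if uses.get(tok) == 1 and tok in defs:
                out.append("(" + expand(defs[tok]) + ")")  # fold in place
            else:
                out.append(tok)
        return " ".join(out)
    # keep only definitions that are not single-use temporaries
    return {v: expand(e) for v, e in defs.items() if uses.get(v, 0) != 1}

# t1 and t2 are each used once, so they fold into the expression for r:
defs = {"t1": "a + b", "t2": "t1 * c", "r": "t2 - d"}
uses = {"t1": 1, "t2": 1, "r": 2}
print(inline_single_use(defs, uses))  # {'r': '((a + b) * c) - d'}
```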
Of course, you'll never get meaningful local variable names out of this procedure. Globals are preserved.
[1] https://pfalcon.github.io/ssabook/latest/
It looks like there is a Lua decompiler for Lua 5.1, written in C, that can be used for study: https://github.com/viruscamp/luadec. The paper https://www.lua.org/wshop05/Muhammad.pdf describes how it works.
And on the extremely simple example you give, it comes back with exactly the same thing:
$ ./luadec ../tmp/luac.out
-- Decompiled using luadec 2.2 rev: 895d923 for Lua 5.1 from https://github.com/viruscamp/luadec
-- Command line: ../tmp/luac.out
-- params : ...
-- function num : 0
plz_help_me = 2
oh_no = 24
plz_work = plz_help_me + oh_no
print(plz_work)
There is a debug flag -d to show a little bit about how it works:
-- Decompiled using luadec 2.2 rev: 895d923 for Lua 5.1 from https://github.com/viruscamp/luadec
-- Command line: -d ../tmp/luac.out
LoopTree of function 0
FUNCTION_STMT=0x2e2f5ec0 prep=-1 start=0 body=0 end=11 out=12 block=0x2e2f5f10
----------------------------------------------
1 LOADK 0 1
SET_SIZE(tpend) = 0
next bool: 0
locals(0):
vpend(0):
tpend(1): 0{2}
Note: The Lua VM in version 5.1 (and other versions up to the latest, 5.4) seems to be stack oriented here. This means that in order to compute plz_help_me + oh_no, both values need to be pushed onto a stack; then, when the ADD bytecode instruction is seen, two entries are popped off the stack and the result is pushed back on. From the decompiler's standpoint, a tree fragment or "AST" fragment of the addition with its two operands is created.
In the output above, a constant is pushed onto the stack tpend, where we see 0{2}. The stuff in braces seems to hold the output representation. Later on, just before the ADD opcode, you will see two entries on the stack. Entries are separated simply by a space in the debug output.
----------------------------------------------
2 SETGLOBAL 0 0
next bool: 0
locals(0):
vpend(1): -1{plz_help_me=2}
tpend(0):
----------------------------------------------
It seems to set the variable plz_help_me from the top of the stack and to push the assignment statement instead.
3 LOADK 0 3
SET_SIZE(tpend) = 0
next bool: 0
locals(0):
vpend(0):
tpend(1): 0{24}
...
At this point I guess the decompiler sees that the stack is empty (SET_SIZE(tpend) = 0), so the first statement is done.
Moving on to the next statement, the constant 24 is then loaded onto tpend.
In vpend and tpend, we see the output as it gets built up.
Looking at the C code, it looks like it loops over functions, within that loops over what it thinks are statements, and within that is driven by a loop over bytecode instructions, "walking" them based on instruction type and information it saves.
It looks like it builds an AST on the fly in a somewhat ad-hoc manner. (By "ad hoc" I simply mean programmer code builds the tree.)
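As a toy illustration of that on-the-fly AST building (my own Python sketch, not luadec's actual code; the opcode handling is drastically simplified and expressions are just strings rather than tree nodes), constants and globals are pushed, ADD folds the top two stack entries into one expression, and SETGLOBAL pops the top entry into an assignment statement:

```python
# A drastically simplified stack-walking "decompiler" for a few opcodes.
def decompile(ops):
    stack, stmts = [], []
    for op, arg in ops:
        if op == "LOADK":
            stack.append(str(arg))            # push a constant
        elif op == "GETGLOBAL":
            stack.append(arg)                 # push a global's name
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()   # fold two operands into one
            stack.append(f"{a} + {b}")
        elif op == "SETGLOBAL":
            stmts.append(f"{arg} = {stack.pop()}")  # emit an assignment
    return stmts

ops = [("LOADK", 2), ("SETGLOBAL", "plz_help_me"),
       ("LOADK", 24), ("SETGLOBAL", "oh_no"),
       ("GETGLOBAL", "plz_help_me"), ("GETGLOBAL", "oh_no"),
       ("ADD", None), ("SETGLOBAL", "plz_work")]
print("\n".join(decompile(ops)))
```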
Although for compilation, especially of optimized code, Static Single Assignment (SSA) form is a win, in the decompilation direction for high-level bytecode such as is found here it is usually neither needed nor wanted. SSA maps a finite set of reusable names to a potentially infinite set.
In a decompiler we want to go the other direction, from infinite registers (or even finite registers) back to the names the programmer used, and here this mapping is already given, so just use it. This is simpler too.
If the same name is used for two conceptually different computations, tracking that is probably wanted. Correcting or improving the style of the source code in whatever way the decompiler writer feels might be nice usually isn't appreciated. (In fact, in my experience many programmers using a decompiler like this will call it a bug, when technically it isn't.)
The luadec decompiler breaks things into functions and walks over the disassembled instructions in an ad-hoc way based on the specific opcodes. After this tree, or list of tree nodes, is built (an AST, if you will, and that is the term this decompiler uses), there seem to be custom print routines for each AST node type.
Personally, I think there is a better way, and one that is slightly different from the one suggested in another answer.
I wrote about decompilation from this novel perspective here. I believe it applies to Lua as well. There is also an older, longer, more research-oriented paper here.
As with many decompilers, logic instructions introduce control flow, and strategies that are instruction based, committing to a path early on from the instructions, have a hard time shifting these around. A pattern-matching/parsing/term-rewriting approach, where control flow is marked in the instructions (such as by adding pseudo Phi-function instructions, or their equivalent), copes with this better. Dominator regions from a control-flow graph also help greatly here.
Edit:
There is also unluac, a Lua decompiler written in Java that can be studied. Although there are very few comments in the code describing how it works, the code itself seems well organized, and the functions are pretty small and clear. It seems similar in operation in that there is a part that is bytecode-operation based and a part that is more expression-tree based. It is actively worked on and is the only decompiler that supports Lua versions from 5.0 up to 5.4.

Answer Set Programming - make a FACT invalid

I have a question regarding Answer Set Programming on how to make an existing fact invalid, when there is already (also) a default statement present in the Knowledge Base.
For example, there are two persons, seby and andy, and only one of them can drive at a time. The scenario can be that seby can drive, as seen in Line 3, but let's say that after his license is cancelled he cannot drive anymore; hence we now have Lines 4 to 7, and meanwhile andy has learnt to drive, as seen in Line 7. Line 6 says only one person can drive at a time, besides saying that seby and andy are not the same.
1 person(seby).
2 person(andy).
3 drives(seby).
4 drives(seby) :- person(seby), not ab(d(drives(seby))), not -drives(seby).
5 ab(d(drives(seby))).
6 -drives(P) :- drives(P0), person(P), P0 != P.
7 drives(andy).
In the above program, Lines 3 and 7 contradict with Line 6, and the Clingo solver (which I use) obviously outputs UNSATISFIABLE.
Having said all this, please don't say to delete Line 3 and the problem is solved. The intention behind asking this question is to know whether it is possible now to make Line 3 somehow invalid to let Line 4 do its duty.
However, Line 4 can also be written as:
4 drives(P) :- person(P), not ab(d(drives(P))), not -drives(P).
Thanks a lot in advance.
I do not fully understand the problem. Line 3 and line 4 are separate rules; even if line 4's body is false, line 3 would still be true. In other words, line 4 seems redundant.
It seems like you want a choice rule. I assume ab(d(drives(seby))) denotes that seby has lost his license. In the program below, line four is the constraint that only people with a license may drive, and line five is the choice rule, so by default andy or seby can drive, but not both. Notice that in the ground program, line five is equivalent to drives(seby) :- not drives(andy). and drives(andy) :- not drives(seby). You could also make seby the preferred driver using optimization statements (the choice rule below acts like an optimization statement).
person(seby).
person(andy).
ab(d(drives(seby))).
:- person(P), ab(d(drives(P))), drives(P).
1{drives(P) : person(P)}1.
If something is true, it must always be true. Therefore the line:
drives(seby).
will always be true.
However, we can get around this by putting the fact into a choice rule.
0{drives(seby)}1.
This line says that an answer will have zero or one instance of drives(seby). This means we can have rules that contradict drives(seby) and the answer will still be satisfiable, but we can also have drives(seby) be true.
Both this program:
0{drives(seby)}1.
drives(seby).
And this program:
0{drives(seby)}1.
:- drives(seby).
Are satisfiable.
Most likely you will want drives(seby). to be true if it can be, and false if it can't be. To accomplish this, we need to force Clingo into making drives(seby). true if it can. We can do this by using an optimization.
A naive way to do this is to count how many drives(seby). exist (either 0 or 1) and maximize the count.
We can count the number of drives(seby). with this line:
sebyCount(N) :- N = #count {drives(seby) : drives(seby)}.
N is equal to the number of drives(seby) atoms in the domain drives(seby). This will be either 0 or 1.
And then we can maximize the value of N with this statement:
#maximize {N@1 : sebyCount(N)}.
This maximizes the value of N with the priority 1 (the lower the number, the lower the priority) in the domain of sebyCount(N).

Shorten if condition x > 3 || y > 3 || z > 3

Is there a better way to write this condition:
if ( x > 3 || y > 3 || z > 3 ) {
...
}
I was thinking of some bitwise operation but couldn't find anything.
I searched on Google, but it is hard to find anything related to this kind of basic question.
Thanks!
Edit
I was thinking of programming in general. Would it be different according to a specific language, such as C/C++, Java...?
What you have is good. Assuming C/C++ or Java:
Intent is clear
Short-circuit evaluation means the expression is true as soon as any one part is true, and the remaining parts are skipped.
Looking at that 2nd point: if any of x, y or z is more likely to be > 3, put it to the left of the expression so it is evaluated first, which means the others may not need to be evaluated at all.
For argument's sake, if you must have a bitwise check, (x | y | z) > 3 works for non-negative values (note the parentheses: > binds tighter than | in C-family languages), but it normally won't be reduced, so it's (probably*) always two bitwise ops and a compare, where the other way could be as fast as one compare.
(* This is where the language lawyers arrive and add comments on why this edit is wrong and the bitwise version can be optimised ;-)
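For what it's worth, here is a quick brute-force check (a Python sketch written for this answer, not production code) that the bitwise form agrees with the plain condition for small non-negative integers, and a case showing it break with a negative input:

```python
# The plain condition versus the bitwise variant. For non-negative
# integers, (x | y | z) > 3 is true exactly when some value has a bit
# set at position 2 or higher, i.e. when some value is > 3.
def plain(x, y, z):
    return x > 3 or y > 3 or z > 3

def bitwise(x, y, z):
    return (x | y | z) > 3

# Exhaustively compare the two over a small non-negative range.
assert all(plain(x, y, z) == bitwise(x, y, z)
           for x in range(8) for y in range(8) for z in range(8))

# With a negative value the trick breaks: 4 | -1 is -1, which is not > 3.
print(plain(4, -1, 0), bitwise(4, -1, 0))  # True False
```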
There was a comment here (now deleted) along the lines of "new programmers shouldn't worry about this level of optimisation" - and it was 100% correct. Write easy-to-follow, working code and THEN try to squeeze performance out of it AFTER you know it is "too slow".

How does multiple assignment work?

From Section 4.1 of Programming in Lua.
In a multiple assignment, Lua first evaluates all values and only then
executes the assignments. Therefore, we can use a multiple assignment
to swap two values, as in
x, y = y, x -- swap 'x' for 'y'
How does the assignment work actually?
How multiple assignment gets implemented depends on which implementation of Lua you are using. The implementation is free to do things any way it likes as long as it preserves the semantics. That is, no matter how things get implemented, you should get the same result as if you had saved all the values on the RHS before assigning them to the LHS, as the Lua book explains.
If you are still curious about the actual implementation, one thing you can do is see what is the bytecode that gets produced for a certain program. For example, taking the following program
local x,y = 10, 11
x,y = y,x
and passing it to the bytecode compiler (luac -l) for Lua 5.2 gives
main <lop.lua:0,0> (6 instructions at 0x9b36b50)
0+ params, 3 slots, 1 upvalue, 2 locals, 2 constants, 0 functions
1 [1] LOADK 0 -1 ; 10
2 [1] LOADK 1 -2 ; 11
3 [2] MOVE 2 1
4 [2] MOVE 1 0
5 [2] MOVE 0 2
6 [2] RETURN 0 1
The MOVE opcode assigns the value in the right register to the left register (see lopcodes.h in the Lua source for more details). Apparently, what is going on is that registers 0 and 1 are being used for x and y, and register 2 is being used as a temporary extra slot. x and y get initialized with constants in the first two opcodes, and in the next three opcodes a swap is performed using the "temporary" slot, much as you would do by hand:
tmp = y -- MOVE 2 1
y = x -- MOVE 1 0
x = tmp -- MOVE 0 2
Given how Lua used a different approach when doing a swapping assignment versus a static initialization, I wouldn't be surprised if you got different results for different kinds of multiple assignments (setting table fields is probably going to look very different, especially since then the order should matter due to metamethods...). We would need to find the part of the source where the bytecode gets emitted to be 100% sure, though. And as I mentioned before, all of this may vary between Lua versions and implementations, especially if you compare LuaJIT with PUC Lua.
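For comparison, Python has the same evaluate-the-whole-right-side-first semantics (this is Python, not Lua, but the observable behaviour matches what the book describes):

```python
# Python also evaluates the entire right-hand side before assigning,
# so a swap works without a visible temporary:
x, y = 10, 11
x, y = y, x
print(x, y)  # 11 10

# Spelled out with an explicit temporary, mirroring the MOVE sequence:
a, b = 10, 11
tmp = b
b = a
a = tmp
print(a, b)  # 11 10
```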

Is this considered memoisation?

In optimising some code recently, we ended up performing what I think is a "type" of memoisation but I'm not sure we should be calling it that. The pseudo-code below is not the actual algorithm (since we have little need for factorials in our application, and posting said code is a firing offence) but it should be adequate for explaining my question. This was the original:
def factorial(n):
    if n == 1:
        return 1
    return n * factorial(n - 1)
Simple enough, but we added fixed points so that large numbers of calculations could be avoided for larger numbers, something like:
def factorial(n):
    if n == 1:
        return 1
    if n == 10:
        return 3628800
    if n == 20:
        return 2432902008176640000
    if n == 30:
        return 265252859812191058636308480000000
    if n == 40:
        return 815915283247897734345611269596115894272000000000
    # And so on.
    return n * factorial(n - 1)
This, of course, meant that 12! was calculated as 12 * 11 * 3628800 rather than the less efficient 12 * 11 * 10 * 9 * 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1.
But I'm wondering whether we should be calling this memoisation since that seems to be defined as remembering past results of calculations and using them. This is more about hard-coding calculations (not remembering) and using that information.
Is there a proper name for this process or can we claim that memoisation extends back not just to calculations done at run-time but also those done at compile-time and even back to those done in my head before I even start writing the code?
I'd call it pre-calculation rather than memoization. You're not really remembering any of the calculations you've done in the process of calculating a final answer for a given input; rather, you're pre-calculating some fixed number of answers for specific inputs. Memoization as I understand it is really more akin to "caching" a set of results as you calculate them, for later reuse. If you were to store each value calculated so that you didn't need to recalculate it again later, that would be memoization. Your solution differs in that you never store any results calculated by your program, only the fixed points that were pre-calculated. With memoization, if you reran the function with an input it had already seen, it would not have to recalculate the result; it would simply reuse it.
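The distinction can be shown in a few lines of Python (a sketch: the FIXED_POINTS table stands in for the hard-coded returns in the question, and functools.lru_cache is the standard library's memoization decorator):

```python
import functools

# Pre-calculation: answers baked in before the program ever runs.
FIXED_POINTS = {1: 1, 10: 3628800, 20: 2432902008176640000}

# Memoization: lru_cache remembers results as they are computed at
# run time, so repeated calls reuse earlier work.
@functools.lru_cache(maxsize=None)
def factorial(n):
    if n in FIXED_POINTS:
        return FIXED_POINTS[n]
    return n * factorial(n - 1)

print(factorial(12))  # 479001600, computed as 12 * 11 * 3628800
```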
Whether or not you are hard-coding the results in, this is still memoization, because you have already calculated results that you are expecting to calculate again. This may happen at run time or at compile time, but either way it's memoization.
Memoization is done at run-time. You are optimizing at compile time. So, it is not.
See for example ... Wikipedia
Or ...
Memoization
The term memoization was coined by Donald Michie (1968) to refer to the process by which a function is made to automatically remember the results of previous computations. The idea has become more popular in recent years with the rise of functional languages; Field and Harrison (1988) devote a whole chapter to it. The basic idea is just to keep a table of previously computed input/result pairs.
Peter Norvig
University of California
(the bold is mine)
Link
def memoisation(f):
    dct = {}
    def myfunction(x):
        if x not in dct:
            dct[x] = f(x)
        return dct[x]
    return myfunction

@memoisation
def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n-1) + fibonacci(n-2)

def nb_appels(n):
    if n == 0 or n == 1:
        return 0
    else:
        return 1 + nb_appels(n-1) + 1 + nb_appels(n-2)

print(fibonacci(13))
print('nbappel', nb_appels(13))