How does an interpreter resolve function calls or branch (jump) statements?

I know the question seems a bit broad. I tried searching for answers but couldn't find much. If anyone could describe the process or point me to the right source, I'd appreciate it.

Assuming a bytecode-based interpreter, the usual way to do this would be as follows:
You have a variable, the program counter, which holds the index of the instruction to execute next. Normally you increase that counter by 1 after each instruction, but when executing a branch you instead set it to the target location of the jump.
For function calls you do the same thing, but you also push the old value of the counter plus one onto the call stack. Then, when you execute the return instruction, you pop that value off the stack and set the counter to it.
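For illustration, here is a minimal sketch of that dispatch loop in Python; the opcode names and instruction encoding are invented for this example, not taken from any particular VM:

def run(program):
    pc = 0            # program counter: index of the next instruction
    call_stack = []   # return addresses pushed by CALL, popped by RET
    stack = []        # operand stack for values

    while pc < len(program):
        op, arg = program[pc]          # each instruction is (opcode, operand)
        if op == "PUSH":
            stack.append(arg)
            pc += 1
        elif op == "PRINT":
            print(stack.pop())
            pc += 1
        elif op == "JUMP":             # unconditional branch
            pc = arg                   # overwrite pc instead of adding 1
        elif op == "JUMP_IF_ZERO":     # conditional branch
            pc = arg if stack.pop() == 0 else pc + 1
        elif op == "CALL":
            call_stack.append(pc + 1)  # remember where to resume
            pc = arg                   # jump to the function's first instruction
        elif op == "RET":
            pc = call_stack.pop()      # resume right after the original CALL
        else:
            raise ValueError("unknown opcode: " + op)

run([
    ("CALL", 4),                      # 0: call the "function" at index 4
    ("PUSH", "back in main"),         # 1: execution resumes here after RET
    ("PRINT", None),                  # 2
    ("JUMP", 7),                      # 3: jump past the end of the program (halt)
    ("PUSH", "inside the function"),  # 4
    ("PRINT", None),                  # 5
    ("RET", None),                    # 6
])

Running it prints "inside the function" and then "back in main", showing the counter being overwritten by CALL and JUMP and restored by RET.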

Related

Why is numpy save not immediate, and can it be forced to save immediately?

I thought this question would have been asked already, but I can't find it, so here goes: I've noticed that numpy.save calls only seem to take effect, i.e. the file to be created actually appears, after the entire script has finished running. This is bad when the code takes days or weeks to run, and I want to pin down exactly which function, and which arguments to it, are causing the bottleneck.
There is a similar issue with the print() command; it doesn't write to the output file immediately but rather waits until the entire code is finished before writing. I can force it to write immediately with this code:
def printnow(*messages):
    w = open("output.log", "a")
    for message in messages:
        w.write(str(message))
        w.write(" ")
    w.write("\n")
    w.close()
I was wondering whether it's possible to do an analogous thing, i.e. force an immediate save, for numpy arrays. No need for appending; overwriting with the current value of the numpy array is fine.
If it makes a difference, I'm not running the code on my personal computer but a group server, which I issue commands to and check on using Putty and WinSCP.
Thanks
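A minimal sketch of forcing writes to reach the disk right away, assuming Python 3 on a POSIX-style filesystem (the path and function name below are placeholders, not from the original code): print accepts flush=True, and if numpy.save is given an open file object, that file can be flushed and fsynced like any other:

import os
import numpy as np

def save_now(path, array):
    # Write the array and push it to the OS and disk immediately.
    # np.save itself writes when called; the flush + fsync guard against
    # the data sitting in Python or OS buffers.
    with open(path, "wb") as f:
        np.save(f, array)
        f.flush()
        os.fsync(f.fileno())

print("progress message", flush=True)  # flush stdout without a helper function

Whether this helps depends on where the delay really comes from; a remote or networked filesystem on the group server, for example, may still defer things.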
Edit: I tried another package, shelve, and it encounters the same problem. I create a global variable called function_calls and initialize it to 0. Then, at the start of the function that I suspect is causing the bottleneck, I put in the following code:
global function_calls
file = 'function_inputs' + str(function_calls)
function_shelf = shelve.open(file, 'n')
for key in dir():
    function_shelf[key] = locals()[key]
function_calls += 1
This code is intended to create a new file that saves the function inputs, each time the function is called. Unfortunately, 9 hours into starting the run, no files have been created. So I suspect Python is just waiting until the whole run is finished before creating the files I asked it to.
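One detail worth noting about the snippet above: the shelf is never synced or closed, so its contents can sit in shelve's in-memory cache. A sketch of the same idea with an explicit sync and close, reusing the names from the snippet (record_inputs and its inputs argument are placeholders for however the function's locals are gathered; Python 3 assumed):

import shelve

function_calls = 0

def record_inputs(**inputs):
    # Save the inputs of one call to its own shelf file and make sure the
    # data actually reaches the backing file before returning.
    global function_calls
    file = 'function_inputs' + str(function_calls)
    with shelve.open(file, 'n') as function_shelf:   # closed automatically on exit
        for key, value in inputs.items():
            function_shelf[key] = value
        function_shelf.sync()                        # flush the shelf's cache
    function_calls += 1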

Variable is changing without the program running

On the website I am writing, there is an object called person which holds a variable called balance. At one point I call the set method and change balance's value to 100 from 0.
I noticed there was a problem when, at the end of running my program, the value of balance was back to 0. I placed a breakpoint where it changes balance, at the code
User.person.balance = Date.Parse(txtBal_Updated.Text)
and it goes through the setter and changes the value from 0 to 100. I stop the program right after this change and use the tracer to look at the value of balance, and it says 100. But if I look at person, and through person at balance, it shows 0. Then when I look back at balance it has suddenly changed back to 0 without me stepping through the program at all. I am very confused how an object's value can change without the program running.
What is the thing you call “your program”? Is it some JavaScript in a web page? How do you run it?
What is “the tracer”? With what tool(s) do you inspect the variables?
Your problem strongly suggests a variable-scope issue; some garbage collection may be involved too.
You are focusing on the variables themselves, but in your situation I would first suspect the instrumentation.

Creating robust real-time monitors for variables

We can create a real-time monitor for a variable like this:
CreatePalette@Panel@Row[{"x = ", Dynamic[x]}]
(This is more interesting and useful if x happens to be something like $Assumptions. It's so easy to set a value and then forget about it.)
Unfortunately this stops working if the kernel is re-launched (Quit[], then evaluate something). The palette won't show changes in the value of x any more.
Is there a way to do this so it keeps working even across kernel sessions? I find myself restarting the kernel quite often. (If the resulting palette causes the kernel to be automatically started after Quit that's fine.)
Update: As mentioned in the comments, it turns out that the palette ceases working only if we quit by evaluating Quit[]. When using Evaluation -> Quit Kernel -> Local, it will keep working.
Link to same question on MathGroup.
I can only guess, because here on my Ubuntu machine the situation seems buggy. The trick with Quit from the menu, as Leonid suggested, did not work here. Another observation: in a fresh Mathematica session with only one notebook open,
Dynamic[x]
x = 1
Dynamic[x]
x = 2
gives as expected
2
1
2
2
Typing Quit on the next line, evaluating it, and then typing x=3 updates only the first of the Dynamic[x] outputs.
Nevertheless, have you checked the command
Internal`GetTrackedSymbols[]
This gives not only the tracked symbols but also some kind of ID indicating where the dynamic content belongs. If you can find out what exactly these numbers are and investigate the other functions in the Internal` context, you may be able to re-register your palette's Dynamic content manually after restarting the kernel.
I thought I had something like that with
Internal`SetValueTrackExtra
but I'm currently not able to reproduce the behavior.
@halirutan's answer jarred my memory...
Have you ever come across: Experimental/ref/ValueFunction? (documentation address)
Although the documentation contains no examples, the 'more information' section provides the following tidbit:
The assignment ValueFunction[symb] = f specifies that whenever
symb gets a new value val, the expression f[symb,val] should be
evaluated.

set a breakpoint, when called: return and continue

I know how to do this in gdb. I'd attach, and follow with:
break myfunction
commands
return
cont
end
cont
I'm wondering if there's a way of doing this in C. I already have my code working for reading and writing memory addresses, and it automatically finds the pid and does related stuff. I'm stuck on implementing this use of breakpoints.
If you are talking about some sort of hand-written debugger, you can use the instruction pointer (IP) value to set a breakpoint: when the IP hits a certain value, you stop the program being debugged and perform some routine (for example, handing control over to the debugger process). To use function names, you need symbol tables, as is done in GDB.
It's not quite clear what you are trying to achieve.
The GDB sequence you've shown will simply make myfunction return immediately.
Assuming you want your mini-debugger to have the same effect, simply write the opcode for ret (0xC3 on x86) to the address of myfunction; no need to do the breakpoint at all.
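A sketch of that patch done from a tracer process, written in Python for brevity (assuming Linux and x86, and that the tracer is already allowed to write the target's memory, e.g. because it is ptrace-attached as the question describes; pid and func_addr are placeholders):

def make_function_return_immediately(pid, func_addr):
    # Overwrite the first byte of the target function with the x86 `ret`
    # opcode (0xC3), so any call to it returns immediately.
    # Writing /proc/<pid>/mem only works if this process may trace the target.
    with open(f"/proc/{pid}/mem", "r+b", buffering=0) as mem:
        mem.seek(func_addr)
        mem.write(b"\xC3")

The C version would do the same single-byte write, either with ptrace(PTRACE_POKETEXT, ...) (a read-modify-write of one word) or by writing /proc/<pid>/mem directly.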

How to really trap all errors with $etrap in Intersystems Caché?

I've been banging my head a lot because of this. Given the way $etrap (the error-handling special variable) was conceived, you must be careful to really trap all errors. I've been partially successful in doing this, but I'm still missing something, because when the code runs in user mode (application mode) there are internal Caché library errors that still halt the application.
What I did was:
ProcessX(var)
    set sc=$$ProcessXProtected(var)
    w !,"after routine call"
    quit sc
ProcessXProtected(var)
    new $etrap
    ; This stops Cache from processing the error before this context. Code
    ; will resume at the line [w !,"after routine call"] above
    set $etrap="set $ECODE = """" quit:$quit 0 quit"
    set sc=1
    set sc=$$ProcessHelper(var)
    quit sc
ProcessHelper(var)
    new $etrap
    ; this code tells Cache to keep unwinding error-handling contexts up
    ; to the previous error handler.
    set $etrap="quit:$quit 0 quit"
    do AnyStuff^Anyplace(var)
    quit 1
AnyStuffFoo(var)
    ; Call anything, which might in turn call many subroutines.
    ; The important point is that we don't know how many contexts
    ; will be created from now on. So we must trap all errors, in any
    ; case.
    ; Call internal Cache library
    quit
After all this, I can see that when I call the program from a prompt it works! But when I call from Cache Terminal Script (application mode, I was told) it fails and aborts the program (the error trapping mechanism doesn't work as expected).
Is it possible that an old-style error trap ($ZTRAP) is being set only in user mode?
The documentation on this is pretty good, so I won't repeat it all here, but a key point is that $ZTRAP isn't New-ed in the same way as $ETRAP. In a way, it is "implicitly new-ed", in that its value only applies to the current stack level and subsequent calls. It reverts to any previous value once you Quit up past the level it was set in.
Also, I'm not sure if there's a defined order of precedence between $ETRAP and $ZTRAP handlers, but if $ZTRAP is of higher precedence, that would override your $ETRAPs.
You could try setting $ZTRAP yourself right before you call the library function. Set it to something different than $ETRAP so you can be sure which one was triggered.
Even that might not help though. If $ZTRAP is being set within the library function, the new value will be in effect, so this won't make a difference. This would only help you if the value of $ZTRAP came from somewhere further up the stack.
You didn't mention what library function caused this. My company has source code for some library functions, so if you can tell me the function name I'll see what I can find. Please give me the value of $ZVersion too so I can be sure we're talking about the same version of Cache.