What does the stack of this function look like? - jvm

I have the following question.
Given is the function:
show(int a, int b)
{
    int v1;
    int v2;
}
a and b are parameters; v1 and v2 are local variables. Draw a stack in which it is clear where a, b, v1, v2, the old frame pointer and the return address are. Also show where the high and low stack addresses are.
I hope I have been clear enough.
Edit:
What I have now is:
v2 <-- SP
v1
prevLV <-- LV
Ra
a
b

Drawing a definitive picture is hard as it depends on the implementation of the JVM, but what you have now is VERY unlikely to be correct.
Because Java doesn't distinguish args from temps (see the iload, etc. bytecodes), they're going to need to appear side by side in memory, or someone's going to have to copy them from the caller's pending stack to the callee at frame construction time (which tends to be expensive).
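For intuition, here's a sketch of how show looks when compiled as a static method (the slot numbers are what javac normally assigns; the actual frame layout in memory is still implementation-defined): parameters and locals share one numbering, so they sit side by side rather than on opposite sides of a frame pointer.

class Show {
    static void show(int a, int b) {  // a -> slot 0, b -> slot 1
        int v1 = a;                   // iload_0 / istore_2  (v1 -> slot 2)
        int v2 = b;                   // iload_1 / istore_3  (v2 -> slot 3)
    }
}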


Nim sequence assignment value semantics

I was under the impression that sequences and strings always get deeply copied on assignment. Today I got burned when interfacing with a C library to which I pass unsafeAddr of a Nim sequence. The C library writes into the memory area starting at the passed pointer.
Since I don't want the original Nim sequence to be changed by the library I thought I'll simply copy the sequence by assigning it to a new variable named copy and pass the address of the copy to the library.
Lo and behold, the modifications showed up in the original Nim sequence nevertheless. What's even more weird is that this behavior depends on whether the copy is declared via let copy = ... (changes do show up) or via var copy = ... (changes do not show up).
The following code demonstrates this in a very simplified Nim example:
proc changeArgDespiteCopyAssignment(x: seq[int], val: int): seq[int] =
  let copy = x
  let copyPtr = unsafeAddr(copy[0])
  copyPtr[] = val
  result = copy

proc dontChangeArgWhenCopyIsDeclaredAsVar(x: seq[int], val: int): seq[int] =
  var copy = x
  let copyPtr = unsafeAddr(copy[0])
  copyPtr[] = val
  result = copy
let originalSeq = @[1, 2, 3]
var ret = changeArgDespiteCopyAssignment(originalSeq, 9999)
echo originalSeq
echo ret
ret = dontChangeArgWhenCopyIsDeclaredAsVar(originalSeq, 7777)
echo originalSeq
echo ret
This prints
@[9999, 2, 3]
@[9999, 2, 3]
@[9999, 2, 3]
@[7777, 2, 3]
So the first call changes originalSeq while the second doesn't. Can someone explain what is going on under the hood?
I'm using Nim 1.6.6 and am a total Nim newbie.
It turns out there are a lot of issues concerning this behavior in the nim-lang issue tracker. For example:
let semantics gives 3 different results depends on gc, RT vs VM, backend, type, global vs local scope
Seq assignment does not perform a deep copy
Let behaves differently in proc for default gc
assigning var to local let does not properly copy in default gc
clarify spec/implementation for let: move or copy?
RFC give default gc same semantics as --gc:arc as much as possible
Long story short, whether a copy is made depends on a lot of factors, for sequences especially on the scope (global vs. local) and the gc (refc, arc, orc) in use.
More generally, the type involved (seq vs. array), the code generation backend (C vs. JS) and whatnot can also be relevant.
This behavior has tricked a lot of beginners and has not been well received by some of the contributors. It doesn't happen with the newer GCs --gc:arc or --gc:orc, where the latter is expected to become the default GC in an upcoming Nim version.
It has never been fixed in the current default gc because of performance concerns, backward compatibility risks and the expectation that it will disappear anyway once we transition to the newer GCs.
Personally, I would have expected that it at least gets clearly singled out in the Nim language manual. Well, it isn't.
I took a quick look at the generated C code; a highly edited and simplified version is below.
The essential thing that is missing in the let copy = ... variant is the call to genericSeqAssign(), which makes a copy of the argument and assigns that to copy; instead, the argument is assigned to copy directly. So there is definitely no deep copy on assignment in that case.
I don't know if that is intentional or if it is a bug in the code generation (my first impression was astonishment). Any ideas?
tySequence_A* changeArgDespiteCopyAssignment(tySequence_A* x, NI val) {
    tySequence_A* result = NIM_NIL;
    tySequence_A* copy;
    NI* copyPtr;
    copy = x; /* NO genericSeqAssign() call! */
    copyPtr = &copy->data[(NI) 0];
    *copyPtr = val;
    genericSeqAssign(&result, copy, &NTIseqLintT_B);
    return result;
}

tySequence_A* dontChangeArgWhenCopyIsDeclaredAsVar(tySequence_A* x, NI val) {
    tySequence_A* result = NIM_NIL;
    tySequence_A* copy;
    NI* copyPtr;
    genericSeqAssign(&copy, x, &NTIseqLintT_B);
    copyPtr = &copy->data[(NI) 0];
    *copyPtr = val;
    genericSeqAssign(&result, copy, &NTIseqLintT_B);
    return result;
}
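If you need a guaranteed copy on 1.6.x with the default gc, a minimal workaround sketch (the proc name is mine, not from the question) is to allocate fresh storage explicitly instead of relying on let copy = x:

# Hedged sketch: force a real copy regardless of gc and scope by
# allocating a new seq and copying element by element.
proc changeCopyOnly(x: seq[int], val: int): seq[int] =
  var copy = newSeq[int](x.len)
  for i in 0 ..< x.len:
    copy[i] = x[i]   # fresh storage, so x is never aliased
  copy[0] = val
  result = copy

Alternatively, compiling with --gc:arc or --gc:orc gives let copy = x the copy semantics you expected in the first place, as noted above.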

What is the order of local variables on the stack?

I'm currently trying to do some tests with the buffer overflow vulnerability.
Here is the vulnerable code:
#include <stdio.h>

void win()
{
    printf("code flow successfully changed\n");
}

int main(int argc, char **argv)
{
    volatile int (*fp)();
    char buffer[64];
    fp = 0;
    gets(buffer);
    if (fp) {
        printf("calling function pointer, jumping to 0x%08x\n", fp);
        fp();
    }
}
The exploit is quite simple and very basic: all I need here is to overflow the buffer and overwrite the fp value to make it hold the address of the win() function.
While trying to debug the program, I figured out that fp is placed below the buffer (i.e., at a lower address in memory), and thus I am not able to modify its value.
I thought that when a local variable x is declared before y, x ends up higher in memory (i.e., toward the bottom of the stack), so overflowing y could clobber x, which is not the case here.
I'm compiling the program with gcc version 5.2.1, with no special flags (only tested at -O0).
Any clue?
The order of local variables on the stack is unspecified.
It may change between different compilers, different versions or different optimization options. It may even depend on the names of the variables or other seemingly unrelated things.
The order of local variables on the stack is not determined until compile/link (build) time. I'm no expert, certainly, but I think you'd have to do some sort of "hex dump", or perhaps run the code in a debugger environment, to find out where it's allocated. And I'd also guess that this would be a relative address, not an absolute one.
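For example, here's a minimal sketch (output varies by compiler, flags, and ASLR; the comparison logic is mine, not from the question) that prints the addresses your particular build actually chose:

#include <stdio.h>

int main(void)
{
    int (*fp)(void) = 0;
    char buffer[64];
    /* Print the addresses to see the order this particular build chose. */
    printf("&fp     = %p\n", (void *)&fp);
    printf("&buffer = %p\n", (void *)buffer);
    /* If buffer sits below fp, overflowing buffer (writing toward higher
       addresses) can reach fp; otherwise it cannot. */
    return 0;
}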
And as @RalfFriedl has indicated in his answer, the location could change under any number of compiler options invoked for different platforms, etc.
But we do know it's possible to do buffer overflows. Although you hear less about them now due to defensive measures such as address space layout randomization (ASLR), they're still around and paying the bills for someone, I'd guess. There are of course many, many online articles on the subject; here's one that seems fairly current (https://www.synopsys.com/blogs/software-security/detect-prevent-and-mitigate-buffer-overflow-attacks/).
Good luck (should you even say that to someone practicing buffer overflow attacks?). At any rate, I hope you learn some things, and use it for good :)

obj-c pass struct pointer to arm assembly function - corruption [duplicate]

This question already has an answer here: return floats to objective-c from arm assembly function (1 answer). Closed 6 years ago.
I have a structure in obj-c. I pass a pointer to this structure to an arm assembly function that I've written. When I step into the code I see the pointer get successfully passed in, and I can access and modify the values of the structure elements from within my asm code. Life is good - until I return from the asm function. After returning to the calling obj-c code, the structure values are all hosed. I can't figure out why. Below are the relevant pieces of my code.
struct myValues {   // define my structure
    int ptr2A;      // pointer to first float
    float A;
    float B;
    float C;
    float D;
    float E;
    float F;
} myValues;
struct myValues my_asm(int ptr2a, float A, float B, float C, float D, float E, float F); // Prototype for the ASM function
…code here to set values of A-F...
float* ptr2A = &myValues.A; //get the memory address where A is stored
myValues.ptr2A = ptr2A; //put that address into myValues.ptr2A and pass to the ASM function
// now call the ASM code
myValues = my_asm(myValues.ptr2A, myValues.A, myValues.B, myValues.C, myValues.D, myValues.E, myValues.F);
Here is the relevant part of my asm code:
mov r5, r1 // r1 has pointer to the first float A
vdiv.f32 s3, s0, s0 //this line puts 1.0 in s3 for ease in debugging
vstr s3, [r5] // poke the 1.0 into the mem location of A
bx lr
When I step through the code, everything works as expected and I end up with a 1.0 in the memory location for A. But once I execute the return (bx lr) and return to the calling obj-c code, the values in my structure become garbage. I've dug through the ABI and AAPCS (as successfully as a novice probably can) but can't get this figured out. What is happening after that "bx lr" to whack the structure?
Below is "Rev 1" of my asm code. I removed everything except these lines:
_my_asm:
vdiv.f32 s3, s0, s0 // s3 = 1.0
vstr s3, [r1]
bx lr
OK, this was the solution for me. Below is "Rev 2" of the relevant pieces of my obj-c code. I was conflating passing a pointer with passing a copy of the structure - totally hosed. This code just passes a pointer to the first float in my struct...which my asm code picks up from general register r0. Man, I'm hardheaded. ;-)
void my_asm2(int myptr); // this is my prototype.
This is where I call the asm2 code from my obj-c code:
my_asm2(&myValues.A);
My asm2 code looks like this:
_my_asm2: ; #simple_asm_function
// r0 has pointer to the first float of my myValues structure
// Add prolog code here to play nice
vdiv.f32 s3, s0, s0 //result S3 = 1.0
vstr s3, [r0] // poking a 1.0 back into the myValues.A value
// Add Epilog code here to play nice
bx lr
So, in summary, I can now pass a pointer to my structure myValues to my ASM code, and inside my ASM code I can poke new values back into those memory locations. When I return to my calling obj-c code, everything is as expected. Thanks to those who helped me fumble along with this hobby. :-)
The solution here is to simply pass a pointer (that points to the memory location of the first float variable in the structure) to the assembly function. Then, any changes the assembly function makes to those memory locations will be intact upon returning to the calling function. Note that this applies to the situation when you are calling assembly code and want that code to operate on an existing data structure (myValues in this case).
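As a hedged aside (the pointer-typed prototype below is my suggestion, not code from the question): the int parameter happens to work on 32-bit ARM because pointers and ints are both 32 bits and the first argument travels in r0 per the AAPCS, but declaring the parameter as a real pointer lets the compiler check the call site:

void my_asm2(float *firstFloat);  // address still arrives in r0 on 32-bit ARM

my_asm2(&myValues.A);             // asm stores 1.0f through r0, so the change
                                  // to myValues.A survives the return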

How to find the fixpoint of a loop and why do we need this? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I know that in static analysis of programs, we need to find a fixpoint to analyze the information a loop provides.
I have read the wiki as well as related materials in the book Secure Programming with Static Analysis.
But I am still confused by the concept of a fixpoint, so my questions are:
Could anyone give me some explanation of the concept of a fixpoint?
What are the practical ways to find the fixpoint in static analysis?
What information can we get after finding the fixpoint?
Thank you!
Conceptually, the fixpoint corresponds to the most information you obtain about the loop by repeatedly iterating it on some set of abstract values. I'm going to guess that by "static analysis" you're referring here to "data flow analysis" or the version of "abstract interpretation" that most closely follows data flow analysis: a simulation of program execution using abstractions of the possible program states at each point. (Model checking follows a dual intuition in that you're simulating program states using an abstraction of possible execution paths. Both are approximations of concrete program behavior.)
Given some knowledge about a program point, this "simulation" corresponds to the effect that we know a particular program construct must have on what we know. For example, at some point in a program, we may know that x could (a) be uninitialized, or else have its value from statements (b) x = 0 or (c) x = f(5), but after (d) x = 42, its value can only have come from (d). On the other hand, if we have
if ( foo() ) {
    x = 42;    // (d)
    bar();
} else {
    baz();
    x = x - 1; // (e)
}
then the value of x afterwards might have come from either of (d) or (e).
Now think about what can happen with a loop:
while ( x != 0 ) {
    if ( foo() ) {
        x = 42;    // (d)
        bar();
    } else {
        baz();
        x = x - 1; // (e)
    }
}
On entry, we have possible definitions of x from {a,b,c}. One pass through the loop means that the possible definitions are instead drawn from {d,e}. But what happens if foo() fails initially so that the loop does not run at all? What are the possibilities for x then? Well, in this case, the loop body has no effect, so the definitions of x would come from {a,b,c}. But if it ran, even once, then the answer is {d,e}. So what we know about x at the end of the loop is that the loop either ran or it didn't, which means that the assignment to x could be any one of {a,b,c,d,e}: the only safe answer here is the union of the property known at loop entry ({a,b,c}) and the property known at the end of one iteration ({d,e}).
But this also means that we must associate x with {a,b,c,d,e} at the beginning of the loop body, too, since we have no way of determining whether this is the first or the four thousandth time through the loop. So we have to consider again what we can have on loop exit: the union of the loop body's effect with the property assumed to hold on entry to the last iteration. Happily, this is just {a,b,c,d,e} ∪ {d,e} = {a,b,c,d,e}. In other words, we've not obtained any additional information through this second simulation of the loop body, and thus we can stop, since no further simulated iterations will change the result.
That's the fixpoint: the abstraction of the program state that will cause simulation to produce exactly the same result.
Now as for ways to find it, there are many, though the most straightforward ("chaotic iteration") simply runs the simulation of every program point (according to some fair strategy) until the answer doesn't change. A good starting point for learning better algorithms can be found in most any compilers textbook, though it isn't usually taught in a first course. Steven Muchnick's Advanced Compiler Design and Implementation is a more thorough and very readable treatment of the subject. If you can find a copy, Matthew Hecht's Flow Analysis of Computer Programs is another classic treatment. Both books focus on the "data flow analysis" technique for static analysis. You might also try out Principles of Program Analysis, by Nielson/Nielson/Hankin, though the technical details in the book can be pretty hairy. On the other hand, it offers a more general treatment of static analysis overall.
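To make chaotic iteration concrete, here is a minimal sketch (the bitmask encoding and all names are mine, not from those books) that computes the fixpoint for the reaching-definitions example above: the loop-head set keeps absorbing the body's effect until re-simulating the body teaches us nothing new.

#include <stdio.h>

/* Definitions (a)..(e) from the example, encoded as bits. */
#define DEF_A (1u << 0)
#define DEF_B (1u << 1)
#define DEF_C (1u << 2)
#define DEF_D (1u << 3)
#define DEF_E (1u << 4)

int main(void)
{
    unsigned entry = DEF_A | DEF_B | DEF_C; /* defs reaching the loop from outside */
    unsigned head = entry;                  /* abstract state at the loop head */
    for (;;) {
        unsigned bodyExit = DEF_D | DEF_E;  /* body kills everything, generates {d,e} */
        unsigned next = entry | bodyExit;   /* loop may run zero or more times */
        if (next == head)                   /* fixpoint: nothing new was learned */
            break;
        head = next;
    }
    printf("defs at loop head/exit: 0x%02x  (== {a,b,c,d,e})\n", head);
    return 0;
}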

Some questions on DLR call site caching

Kindly correct me if my understanding of the DLR's operation-resolution methodology is wrong.
The DLR has something called call site caching. Given "a+b" (where a and b are both integers), the first time it will not be able to understand the + (plus) operator as well as the operands. So it will resolve the operands and then the operator, and will cache the result in level 1 after building a proper rule.
From the next time onwards, if a similar kind of match is found (where the operands are of type integer and the operator is +), it will look into the cache and return the result, and hence the performance will be enhanced.
If this understanding of mine is correct (if you feel that I am incorrect in the basic understanding, kindly rectify that), I have the below questions for which I am looking for answers:
a) What kinds of operations happen in level 1, level 2 and level 3 call site caching in general (I mean plus, minus... what kinds)?
b) The DLR caches those results. Where does it store them, i.e., what is the cache location?
c) How does it make the rules? Is there any algorithm? If so, could you please specify (any link or your own answer)?
d) Some time back I read somewhere that in level 1 caching the DLR has 10 rules, level 2 has 20 or so (maybe), while level 3 has 100. If a new operation comes in, it makes a new rule. Are these rules (levels 1, 2, 3) predefined?
e) Where are these rules kept?
f) Instead of passing two integers (a and b in the example), if we pass two strings, or one string and one integer, will it form a new rule?
Thanks
a) If I understand this correctly, currently all of the levels work effectively the same: they run a delegate which tests the arguments which matter for the rule and then either performs the operation or notes the failure (the failure is noted by either setting a value on the call site or doing a tail call to the update method). Therefore every rule works like:
public object Rule(object x, object y) {
    if (x is int && y is int) {
        return (int)x + (int)y;
    }
    CallSiteOps.SetNotMatched(site);
    return null;
}
And a delegate to this method is used in the L0, L1, and L2 caches. But the behavior here could change (and did change many times during development). For example at one point in time the L2 cache was a tree based upon the type of the arguments.
b) The L0 cache and the L1 caches are stored on the CallSite object. The L0 cache is a single delegate which is always the 1st thing to run. Initially this is set to a delegate which just updates the site for the 1st time. Additional calls attempt to perform the last operation the call site saw.
The L1 cache includes the last 10 actions the call site saw. If the L0 cache fails, all the delegates in the L1 cache will be tried.
The L2 cache lives on the CallSiteBinder. The CallSiteBinder should be shared across multiple call sites. For example, there should generally be one and only one addition call site binder for the language, assuming all additions are the same. If the L0 and L1 caches fail, all of the available rules in the L2 cache will be searched. Currently the upper limit for the L2 cache is 128.
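For context, here's a hedged sketch of how these pieces fit together in user code, using C#'s own runtime binder as the shared CallSiteBinder (the APIs are standard, but the demo itself is mine):

using System;
using System.Linq.Expressions;
using System.Runtime.CompilerServices;
using Microsoft.CSharp.RuntimeBinder;
using Binder = Microsoft.CSharp.RuntimeBinder.Binder;

class Demo {
    static void Main() {
        // The binder: shared across sites, holds the L2 cache.
        CallSiteBinder binder = Binder.BinaryOperation(
            CSharpBinderFlags.None, ExpressionType.Add, typeof(Demo),
            new[] {
                CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null),
                CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null)
            });
        // The call site: holds the L0 delegate in site.Target plus the L1 cache.
        var site = CallSite<Func<CallSite, object, object, object>>.Create(binder);
        Console.WriteLine(site.Target(site, 2, 3));      // first call: binder builds a rule
        Console.WriteLine(site.Target(site, 4, 5));      // hits the cached int+int rule
        Console.WriteLine(site.Target(site, "a", "b"));  // new operand types: new rule
    }
}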
c) Rules can ultimately be produced in 2 ways. The general solution is to create a DynamicMetaObject which includes both the expression tree of the operation to be performed as well as the restrictions (the test) which determines if it's applicable. This is something like:
public DynamicMetaObject FallbackBinaryOperation(DynamicMetaObject target, DynamicMetaObject arg) {
    return new DynamicMetaObject(
        Expression.Add(
            Expression.Convert(target.Expression, typeof(int)),
            Expression.Convert(arg.Expression, typeof(int))),
        BindingRestrictions.GetTypeRestriction(target.Expression, typeof(int)).Merge(
            BindingRestrictions.GetTypeRestriction(arg.Expression, typeof(int))
        )
    );
}
This creates the rule which adds two integers. This isn't really correct, because if you get something other than integers it'll loop infinitely - so usually you do a bunch of type checks, produce the addition for the things you can handle, and then produce an error if you can't handle the addition.
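As a hedged sketch of that "type checks, then error" pattern (illustrative, not the actual binder source; assumes the usual usings and that the method lives on a binder as in the snippet above):

public DynamicMetaObject FallbackBinaryOperation(DynamicMetaObject target, DynamicMetaObject arg) {
    if (target.LimitType == typeof(int) && arg.LimitType == typeof(int)) {
        // The int + int rule from above, boxed to object since the call
        // site's delegate returns object.
        return new DynamicMetaObject(
            Expression.Convert(
                Expression.Add(
                    Expression.Convert(target.Expression, typeof(int)),
                    Expression.Convert(arg.Expression, typeof(int))),
                typeof(object)),
            BindingRestrictions.GetTypeRestriction(target.Expression, typeof(int)).Merge(
                BindingRestrictions.GetTypeRestriction(arg.Expression, typeof(int))));
    }
    // Anything else: produce a rule that throws, restricted to exactly these
    // operand types, so the site fails fast instead of re-entering the binder.
    return new DynamicMetaObject(
        Expression.Throw(
            Expression.Constant(new InvalidOperationException("no + rule for these operands")),
            typeof(object)),
        BindingRestrictions.GetTypeRestriction(target.Expression, target.LimitType).Merge(
            BindingRestrictions.GetTypeRestriction(arg.Expression, arg.LimitType)));
}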
The other way to make a rule is to provide the delegate directly. This is more advanced and enables some advanced optimizations to avoid compiling code all of the time. To do this you override BindDelegate on CallSiteBinder, inspect the arguments, and provide a delegate which looks like the Rule method I typed above.
d) Answered in b)
e) I believe this is the same question as b)
f) If you put the appropriate restrictions on, then yes (which you're forced to do for the standard binders). Because the restrictions will fail (you don't have two ints), the DLR will probe the caches, and when it doesn't find a rule it'll call the binder to produce a new rule. That new rule will come back with a new set of restrictions, will be installed in the L0, L1, and L2 caches, and then will perform the operation for strings from then on out.