Kotlin Native Pointer initialization

I'm having a bit of a fight with Kotlin/Native and the runtime. In short: I am building a jvmti agent, linked as a dynamic library.
Now I have the following case; what I'd like to achieve can be expressed in C like this:
char* class_sig;
(*jvmti)->GetClassSignature(jvmti, object_klass, &class_sig, NULL);
// do something with class_sig...
(*jvmti)->Deallocate(jvmti, (unsigned char*) class_sig);
So in that case the jvmti environment allocates the memory for class_sig, which is why I have to deallocate it through the jvmti environment.
How can this be achieved in Kotlin? I am a little on the fence about calling nativeHeap.alloc; wouldn't that cause a memory leak, because the jvmti environment already allocates the memory?
Or can I just do:
val signaturePtr = nativeHeap.alloc<CPointerVar<ByteVar>>()
jvmti?.pointed?.pointed?.GetClassSignature?.invoke(jvmti, klass, signaturePtr.ptr, null)
and then call the jvmti Deallocate?

The Kotlin/Native way is to use memScoped blocks for such a task. Take a look at the official guide for C interop.
If you write
memScoped {
    val signaturePtr = alloc<CPointerVar<ByteVar>>()
    jvmti?.pointed?.pointed?.GetClassSignature?.invoke(jvmti, klass, signaturePtr.ptr, null)
    // ... work with signaturePtr.value ...
}
Kotlin will take care of freeing everything you alloc inside the memScoped block, so there is no need to free signaturePtr yourself. Note, however, that the signature string the jvmti environment allocates is a separate native allocation, so it should still be released through jvmti Deallocate.

Related

OBJC_PRINT_VTABLE_IMAGES and OBJC_PRINT_VTABLE_SETUP do not show any output

I've tried to use the OBJC_PRINT_VTABLE_IMAGES and OBJC_PRINT_VTABLE_SETUP environment variables on an Objective-C executable in order to learn about the vtable mechanism in Objective-C objects. Unfortunately, the mentioned environment variables have no effect on the console output, despite the fact that the runtime acknowledged that the variables were set:
» OBJC_PRINT_OPTIONS=1 OBJC_PRINT_VTABLE_IMAGES=YES /Applications/TextEdit.app/Contents/MacOS/TextEdit
objc[41098]: OBJC_PRINT_OPTIONS is set
objc[41098]: OBJC_PRINT_VTABLE_IMAGES is set
I've tried to use both variables on executables provided by the system (TextEdit) and on my own, with no effect.
The whole vtable mechanism in Objective-C objects is obscure. It's hard to find information about this mechanism on Apple's pages. There is some info from other sources, but no official documentation:
http://www.sealiesoftware.com/blog/archive/2011/06/17/objc_explain_objc_msgSend_vtable.html
http://cocoasamurai.blogspot.com/2010/01/understanding-objective-c-runtime.html
Why are these variables not working? Are vtables deprecated in the current version of Objective-C?
In this case, the answer is pretty straightforward - vtable dispatch is no longer optimized in the objective-c runtime, and was probably a bad idea in the first place.
vtable-based dispatch was one of the first attempts to speed up frequent calls in the objective-c runtime, but note that it predates the current method caching solution. Using a fixed set of selectors, as the vtable solution did, not only means increased memory for every class in the runtime; it also means that if your code doesn't happen to call isEqualToString: frequently, for example, you now have a completely wasted pointer for EVERY class in the runtime that overrides ONE of those selectors. Whoops.
Also, note that vtable dispatch by design couldn't work on 32-bit architectures, which meant that once the iOS SDK was released and 32-bit was again a reasonable target for objective-c, that optimization simply couldn't work.
The relevant documentation that I can find for this is in objc-abi.h:
#if TARGET_OS_OSX && defined(__x86_64__)
// objc_msgSend_fixup() is used for vtable-dispatchable call sites.
OBJC_EXPORT void objc_msgSend_fixup(void)
__OSX_DEPRECATED(10.5, 10.8, "fixup dispatch is no longer optimized")
__IOS_UNAVAILABLE __TVOS_UNAVAILABLE __WATCHOS_UNAVAILABLE;
Nowadays, there aren't many vestigial fragments of vtable dispatch left in the runtime. A quick grep over the codebase shows a few places in objc-runtime-new.mm:
#if SUPPORT_FIXUP
    // Fix up old objc_msgSend_fixup call sites
    for (EACH_HEADER) {
        message_ref_t *refs = _getObjc2MessageRefs(hi, &count);
        if (count == 0) continue;

        if (PrintVtables) {
            _objc_inform("VTABLES: repairing %zu unsupported vtable dispatch "
                         "call sites in %s", count, hi->fname());
        }
        for (i = 0; i < count; i++) {
            fixupMessageRef(refs+i);
        }
    }

    ts.log("IMAGE TIMES: fix up objc_msgSend_fixup");
#endif
And
/***********************************************************************
* fixupMessageRef
* Repairs an old vtable dispatch call site.
* vtable dispatch itself is not supported.
**********************************************************************/
static void
fixupMessageRef(message_ref_t *msg)
Which pretty clearly indicates that it's not supported.
See also the method stub for it (if you were to call it without a compiler-generated call site), found in objc-msg-x86_64.s:
ENTRY _objc_msgSend_fixup
int3
END_ENTRY _objc_msgSend_fixup
Where int3 is the breakpoint instruction (it raises SIGTRAP), which would cause a crash if a debugger isn't attached (usually).
So, while vtable dispatch is an interesting note in the history of objective-c, it should be looked back on as little more than an experiment from a time when we weren't quite familiar with the best ways to optimize common method calls.

JVM Memory Segments and JIT Compiler

I know this is JVM dependent and every virtual machine would choose to implement it a little bit differently, yet I want to understand the overall concept.
It has been said that the memory segments the JVM uses to execute a Java program
Java Stacks
Heap
Method Area
PC Registers
Native Method Stacks
are not necessarily implemented with contiguous memory and may actually all be allocated on some heap memory provided by the OS. This leads me to my question.
JVMs that fully use the JIT mechanism and compile bytecode methods into native machine-code methods store these methods somewhere; where would that be? The execution engine (which is usually written in C/C++) would have to invoke these JIT-compiled functions, yet the kernel shouldn't allow a program to execute code saved in the stack / heap / static memory segment, so how does the JVM overcome this?
Another question I have is regarding the Java stacks: when a method (after JIT compilation) is executed on the processor, its local variables should be saved within the Java stacks. Yet again, the Java stacks may be implemented with non-contiguous memory, perhaps even just some stack data structure allocated on the heap acting as a stack. How and where do the local variables of a method being executed get saved? The kernel shouldn't allow a program to treat heap-allocated memory as a process stack, so how does the JVM overcome this difficulty as well?
Again, I want to emphasize that I'm asking about the overall concept; I know each JVM would choose to implement this a little bit differently...
JVMs that fully use the JIT mechanism and compile bytecode methods into native machine-code methods store these methods somewhere; where would that be?
It is stored in the JVM's code cache, which is another native memory region (class metadata, by contrast, lives in the "Perm Gen" in Java <= 7 and "Metaspace" in Java 8).
the execution engine (which is usually written in C/C++) would have to invoke these JIT-compiled functions, yet the kernel shouldn't allow a program to execute code saved in the stack / heap / static memory segment, so how does the JVM overcome this?
The memory region is mapped both writable and executable; on POSIX systems this is done with the mmap or mprotect system calls using the PROT_EXEC flag.
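As a rough illustration of that mechanism (this is not the JVM's actual code; the byte sequence, sizes, and flags below are an assumed Linux/x86-64 example), a JIT can obtain executable memory by asking the kernel for an anonymous mapping with PROT_EXEC, copying machine code into it, and calling it through a function pointer:
// Minimal sketch: map a page as read/write/execute, copy in hand-written
// machine code, and call it. On strict W^X systems you would map the page
// writable first and switch it to executable with mprotect() before calling.
#include <sys/mman.h>
#include <cstring>
#include <cstdio>

int main() {
    // x86-64 machine code for: mov eax, 42; ret
    const unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    void *buf = mmap(nullptr, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;

    std::memcpy(buf, code, sizeof(code));

    // Call the freshly emitted function through a function pointer.
    auto fn = reinterpret_cast<int (*)()>(buf);
    std::printf("%d\n", fn());   // prints 42

    munmap(buf, 4096);
    return 0;
}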
Another question I have is regarding the Java stacks: when a method (after JIT compilation)
Initially the code is not compiled, but it uses the stack in the same way.
is executed on the processor, its local variables should be saved within the Java stacks, yet again the Java stacks may be implemented with non-contiguous memory
There is a stack per thread, and that stack is contiguous.
and perhaps even just some stack data structure allocated on the heap acting as a stack, how and where do the local variables of a method being executed get saved?
On the thread stack.
the kernel shouldn't allow a program to treat heap-allocated memory as a process stack, so how does the JVM overcome this difficulty as well?
It doesn't do this.

Using C++11 lambda functions in ARC ObjectiveC++ - how to do it properly?

I have an ObjectiveC++ project. In the ObjectiveC context I am using ARC and iPhoneSDK 6. In C++ I am using a C++11 compiler.
Lambda functions in C++11 can capture variables by reference. This concept is not really supported by Objective-C, and by trial and error I came up with the following solution. Are there any pitfalls I am not aware of?
Is there a better solution to this problem?
typedef std::function<void ()> MyLambdaType;
...
// m_myView will not go away. ARC managed.
UIView * __strong m_myView;
...
// In Objective-C context I create a lambda function that uses my Objective-C object
UIView &myViewReference = *m_myView;
MyLambdaType myLambda = [&myViewReference]() {
    UIView *myViewBlockScope = &myViewReference;
    // Do something with `myViewBlockScope`
};
...
// In C++11 context I call this lambda function
myLambda();
The straightforward thing to do would be to let the lambda capture the object pointer variable m_myView (I am assuming from your snippet that this is a local variable), and use it normally inside the lambda:
MyLambdaType myLambda = [m_myView]() {
    // Do something with `m_myView`
};
The only concern would be the memory management of m_myView. To be generally correct, the lambda needs to retain m_myView when it is created and release it when it is destroyed (just like blocks do), because the lambda could be used in a scope where m_myView no longer exists.
Reading through the ARC docs, I don't see this situation mentioned specifically, but I believe that it should handle it properly, because (1) captured variables of a C++11 lambda are stored as fields of an anonymous class, which are initialized to the captured value when the lambda is constructed, and (2) ARC properly handles the retaining and releasing of Objective-C object fields of C++ classes on construction and destruction. Unless it says something specifically about lambdas to the contrary, or there's a compiler bug, I see no reason why it should not work.
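If it helps to see the mechanism in isolation, here is a pure C++ sketch of points (1) and (2): a by-value capture becomes a field of the compiler-generated closure type, so it is copied when the lambda is created and destroyed with the lambda. std::shared_ptr stands in here for an ARC-managed Objective-C pointer, whose retain/release ARC would insert at the same two points; the names are invented for the example.
#include <cstdio>
#include <functional>
#include <memory>

int main() {
    std::function<void()> myLambda;
    {
        auto resource = std::make_shared<int>(42);   // stand-in for an ObjC object
        myLambda = [resource]() {                    // closure copies ("retains") the capture
            std::printf("%d\n", *resource);
        };
    }   // the local goes out of scope; the closure's copy keeps the object alive
    myLambda();                                      // still prints 42
    return 0;                                        // closure destroyed, last reference released
}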

Creating a JVM via JNI_CreateJavaVM, receiving an OutOfMemoryError

I'm creating a JVM out of a C++ program via JNI, and the creation itself works fine. The communication with the JVM also works fine; I am able to find classes, create objects, call methods and so on. But one of my methods needs quite a lot of memory, and the JVM throws an OutOfMemoryError when calling it. I don't understand this, as there is more than one GB of free RAM available. The whole process uses about 200MB, and it seems that it doesn't even try to allocate more; it sticks at 200MB and then the exception is thrown.
I tried to pass the -Xmx option to the JVM, but it won't work when the JVM is created through JNI. As far as I understood, a JVM created through JNI should be able to access all the available memory, making the -Xmx option unnecessary - but obviously this assumption is wrong.
So the question is, how can I tell the JVM that it should just use as much memory as it needs?
System: MacOS 10.6
Creation of the JVM:
JNIEnv *env;
JavaVMInitArgs vm_args;
JavaVMOption options;

options.optionString = jvm_options;   // setting the classpath, e.g. "-Djava.class.path=..."
vm_args.version = JNI_VERSION_1_6;    // request a JNI 1.6 VM
vm_args.nOptions = 1;
vm_args.options = &options;
vm_args.ignoreUnrecognized = 0;

int ret = JNI_CreateJavaVM(jvm, (void**)&env, &vm_args);
if (ret < 0)
    printf("\nUnable to Launch JVM\n");
Seems like I got something wrong with the -Xmx option - I tried it again and it works now.
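For reference, a minimal sketch of passing -Xmx (together with the classpath) when creating the JVM through JNI - the option strings, heap size, and error handling here are illustrative assumptions, not taken from the post above:
#include <jni.h>
#include <cstdio>

int main() {
    JavaVM *jvm;
    JNIEnv *env;

    // Each JVM flag goes into its own JavaVMOption entry.
    JavaVMOption options[2];
    options[0].optionString = const_cast<char*>("-Djava.class.path=.");
    options[1].optionString = const_cast<char*>("-Xmx1024m");   // raise the maximum heap

    JavaVMInitArgs vm_args;
    vm_args.version = JNI_VERSION_1_6;
    vm_args.nOptions = 2;
    vm_args.options = options;
    vm_args.ignoreUnrecognized = JNI_FALSE;

    if (JNI_CreateJavaVM(&jvm, (void**)&env, &vm_args) != JNI_OK) {
        std::printf("Unable to launch JVM\n");
        return 1;
    }

    // ... find classes, create objects, call methods ...

    jvm->DestroyJavaVM();
    return 0;
}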

How is native code handled by the JVM

Consider a case where I have to call C++ code from my Java program. The C++ code creates thousands of objects. Where are these dynamic objects stored? I suspect in the JVM heap, because the native code will be a part of the same process as the JVM.
If yes, do the rules of the Java garbage collector thread apply to the objects created by the C++ code?
For the first question: C++ will allocate resources using its own runtime, which has nothing to do with the JVM - the JVM is not aware of any activity in this memory allocator.
For the second question: the Java garbage collector will not GC the memory allocated by C++. You will have to make sure that your Java wrapper initiates the memory release. Before an object is GC'd by Java, the runtime calls the finalize() method. The default one is inherited from java.lang.Object and basically does nothing. You can override it and use it as a hook to initiate deallocating your manually managed memory.
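As a sketch of that pattern on the native side (the names NativeWrapper, createNative, and destroyNative are hypothetical, not from the question): the Java wrapper stores the C++ object's address in a long field, and a native method deletes the object when the wrapper's cleanup hook (finalize(), or better, an explicit close() method) runs:
#include <jni.h>

// Some C++ object allocated on the native heap, invisible to the Java GC.
struct Widget {
    int value = 0;
};

extern "C" JNIEXPORT jlong JNICALL
Java_NativeWrapper_createNative(JNIEnv *, jobject) {
    return reinterpret_cast<jlong>(new Widget());     // Java side keeps this as a long handle
}

extern "C" JNIEXPORT void JNICALL
Java_NativeWrapper_destroyNative(JNIEnv *, jobject, jlong handle) {
    delete reinterpret_cast<Widget*>(handle);         // called from finalize()/close()
}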