Segmentation fault in JNI code when calling CallIntMethod(jclass, jmethodID, ...)

My JNI code consists of calling some Java functions (from a .jar file) from a C++ main.
The code compiles fine, but during execution I get:
Segmentation fault (core dumped)
I ran GNU gdb to debug, and I found the following when this method is called:
if (mid != 0) {
    doub = env->CallIntMethod(cls, mid, 10);

Program received signal SIGSEGV, Segmentation fault.
0x00000000678856ed in jvm!JNI_GetCreatedJavaVMs ()
   from /cygdrive/c/Program Files/Java/jdk1.8.0_05/jre/bin/server/jvm.dll
I also checked whether the class is found (the JNI FindClass function) and whether the JVM is created (the JNI_CreateJavaVM function) via their return values, and everything seems to be just fine.
At the end of the debugging session, the threads exit with code 35584:
[Thread 4632.0x1304 exited with code 35584]
I didn't find anything about this value except that it implies a problem in the path to something required by the executable... Any ideas about this?
I specified the path for the .jar file as follows:
char op[] = "-Djava.class.path=D:\\path\\tojar/MyJar.jar;D:\\path\\toclass";
options[0].optionString = op;
Thank you, StackOverflowers :)
PS: if you think that posting the code can help, please notify me in the comments!

CallIntMethod expects the first argument to be the receiver (i.e. this) object, not a jclass.
When the method you are calling is static, use CallStaticIntMethod instead.
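A minimal sketch of both calling conventions, assuming a hypothetical Java class MyClass with an int compute(int) method:
jclass cls = env->FindClass("MyClass");
if (cls != nullptr) {
    // Static method: pass the jclass and use the CallStatic* family.
    jmethodID midStatic = env->GetStaticMethodID(cls, "compute", "(I)I");
    if (midStatic != nullptr) {
        jint r = env->CallStaticIntMethod(cls, midStatic, 10);
    }
    // Instance method: construct (or otherwise obtain) an object and pass it as the receiver.
    jmethodID ctor = env->GetMethodID(cls, "<init>", "()V");
    jmethodID midInstance = env->GetMethodID(cls, "compute", "(I)I");
    if (ctor != nullptr && midInstance != nullptr) {
        jobject obj = env->NewObject(cls, ctor);
        jint r2 = env->CallIntMethod(obj, midInstance, 10);
    }
}
Passing a jclass where a jobject receiver is expected is exactly the kind of mistake that crashes inside the JVM rather than failing cleanly.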

Related

Metal assertion `A command encoder is already encoding to this command buffer`

I am using Metal in my project, and I have encapsulated some of the kernels as functions, in much the same way as MetalPerformanceShaders does.
So each of my Metal kernels has an Objective-C class with the method:
- (void)encodeToCommandBuffer:(id<MTLCommandBuffer>)cmdBuffer
                 inputTexture:(id<MTLTexture>)inputTexture
                outputTexture:(id<MTLTexture>)outputTexture
                    inputSize:(TextureSize)inputSize
                   outputSize:(TextureSize)outputSize
{
    id<MTLComputeCommandEncoder> enc = [cmdBuffer computeCommandEncoder];
    [enc setComputePipelineState:_state];
    // set arguments to the state
    [enc dispatchThreadgroups:_threadgroupsPerGrid threadsPerThreadgroup:_threadsPerThreadgroup];
    [enc endEncoding];
}
The problem is that my code crashes with the assertion:
failed assertion `A command encoder is already encoding to this command buffer`
The issue is random and happens in different functions. The error description is self-explanatory, but what I am curious about is that the crashes happen in my encodeToCommandBuffer methods. In the pipeline I also use image-processing functions from MetalPerformanceShaders; these are also called via encodeToCommandBuffer, and they do not crash.
So it is clear that my understanding of how an encodeToCommandBuffer method should be written is wrong. How do I need to modify the code? Do I need to check the cmdBuffer state somehow, to verify it is ready to produce a new encoder? And what if it is not? Do I need some sort of loop that waits until the buffer is ready?
OK, sorted out. My pipeline processes multiple instances in parallel, and I made a mistake in the code: the pipeline tried to push all instances through the same command buffer, which was not intended. A command buffer allows only one active encoder at a time, so a second encodeToCommandBuffer call on the same buffer, before the first encoder had called endEncoding, tripped the assertion.
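The fix, sketched here in C++ via Apple's metal-cpp wrapper for brevity (the queue, pipeline state, and grid sizes are placeholders), is to give each parallel instance its own command buffer so encoders never overlap:
#include <Metal/Metal.hpp>

void encodeInstance(MTL::CommandQueue* queue, MTL::ComputePipelineState* state)
{
    // Each instance gets a private command buffer from the shared queue;
    // command buffers are cheap to create and MTLCommandQueue is thread-safe.
    MTL::CommandBuffer* cmdBuf = queue->commandBuffer();
    MTL::ComputeCommandEncoder* enc = cmdBuf->computeCommandEncoder();
    enc->setComputePipelineState(state);
    enc->dispatchThreadgroups(MTL::Size::Make(8, 8, 1), MTL::Size::Make(16, 16, 1));
    enc->endEncoding(); // the buffer accepts a new encoder only after this
    cmdBuf->commit();
}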

Are LLVM "fatal errors" really fatal?

I'm wondering whether LLVM fatal errors are really "fatal", i.e. they invalidate the entire state of the system and are not recoverable.
For example (I'm using the llvm-c interface), the default behavior of the following code:
LLVMMemoryBufferRef mb = LLVMCreateMemoryBufferWithMemoryRange(somedata, data_length, "test", 0);
LLVMModuleRef module;
if (LLVMParseBitcode2(mb, &module) != 0) {
    fprintf(stderr, "could not parse module bitcode");
}
is that if the pointer somedata points to invalid bitcode, the fprintf is never executed, but instead the entire process aborts with its own fatal error message on stderr.
There is supposedly an interface for catching such errors: LLVMFatalErrorHandler. However, after installing an error handler, the process still just aborts without calling it.
The documentation in LLVM is very poor overall, and the C interface is barely documented at all. But it seems like a super-fragile design to have the entire process abort, with no way to opt out, if some bitcode is corrupt!
So I'm wondering whether "fatal" here means, as usual, that once such an error occurs we may not recover and continue using the library (by trying some different bitcode, or repairing the old one, for example), or whether it is not really a "fatal" error, and the FatalErrorHandler or some other means lets us catch it, take remediating action, and continue the program.
OK, after reading through the LLVM source for 10+ hours and enlisting the help of a friendly LLVM dev, the answer here is that this is not, in fact, a fatal error after all!
The functions called above in the C interface are deprecated and should have been removed; LLVM used to have a notion of a "global context", and that was removed years ago. The correct way to do this - so that this error can be caught and handled without aborting the process - is to use the LLVMDiagnosticInfo interface after creating an LLVMContext instance, and to use the context-specific bitcode reader functions:
void llvmDiagnosticHandler(LLVMDiagnosticInfoRef dir, void *p) {
    char *msg = LLVMGetDiagInfoDescription(dir);
    fprintf(stderr, "LLVM Diagnostic: %s\n", msg);
    LLVMDisposeMessage(msg); // the description string must be freed by the caller
}
...
LLVMContextRef llvmCtx = LLVMContextCreate();
LLVMContextSetDiagnosticHandler(llvmCtx, llvmDiagnosticHandler, NULL);
LLVMMemoryBufferRef mb = LLVMCreateMemoryBufferWithMemoryRange(somedata, data_length, "test", 0);
LLVMModuleRef module;
if (LLVMGetBitcodeModuleInContext2(llvmCtx, mb, &module) != 0) {
    fprintf(stderr, "could not parse module bitcode");
}
The LLVMDiagnosticInfo also carries a "severity" code indicating the seriousness of the error (sometimes mere warnings or performance hints are returned). Also, as I suspected, it is not the case that failing to parse bitcode invalidates the library or context state.
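A sketch of reading that severity inside the handler above (LLVMGetDiagInfoSeverity is declared in llvm-c/Core.h alongside the other diagnostic functions):
switch (LLVMGetDiagInfoSeverity(dir)) {
    case LLVMDSError:   /* a real failure, e.g. malformed bitcode */ break;
    case LLVMDSWarning: /* suspicious but recoverable */             break;
    case LLVMDSRemark:  /* optimization remarks land here */         break;
    case LLVMDSNote:    /* supplementary information */              break;
}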
The code that was aborting with the cruddy error message was just a stop-gap to keep legacy apps that still call the old API functions working: it sets up a context with a minimal error handler that behaves in this way.

objective-c: Expanded from macro

In the MoPub SDK for iOS, there is an error when calling the mp_safe_block macro.
Macro definition:
// Macros for dispatching asynchronously to the main queue
#define mp_safe_block(block, ...) block ? block(__VA_ARGS__) : nil
Called as:
mp_safe_block(complete, NSError.sdkInitializationInProgress, nil);
Error message:
Left operand to ? is void, but right operand is of type 'nullptr_t'
Maybe this error has nothing to do with the SDK itself. How can I fix it?
PS:
The SDK code runs correctly in a new Xcode project I created myself, but the error appears in an Xcode project built by MMF2 (Clickteam Fusion),
and that Xcode project is quite old. I updated the Xcode settings, but the error remains.
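For what it's worth, the error message points at the cause: a ternary only compiles when its two branches have compatible types, and when the file is compiled as Objective-C++ (where nil lowers to nullptr, of type nullptr_t) a void-returning block makes the left operand void, so the build fails. A minimal sketch of one common workaround, not an official MoPub patch, is to rewrite the macro as a statement so the branch types never have to agree:
// Statement form: no ternary, so a void block is fine (hypothetical rewrite).
#define mp_safe_block(block, ...) do { if (block) { block(__VA_ARGS__); } } while (0)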

Prevent "Execution was interrupted, reason: internal ObjC exception breakpoint(-3)" on lldb

I've written some code that dumps all ivars of a class into a dictionary in Objective-C. It uses valueForKey: to get the data from the class. Sometimes KVC throws an internal exception that is also caught properly - but this disrupts lldb's expression evaluation, and all I get is:
error: Execution was interrupted, reason: internal ObjC exception breakpoint(-3)..
The process has been returned to the state before expression evaluation.
There are no breakpoints set. I even tried -itrue -ufalse as expression options, but it doesn't make a difference. This totally defeats what I want to use lldb for, and it seems like such a tiny issue. How can I get lldb to simply ignore internal, caught ObjC exceptions while calling a method?
I tried this both from within Xcode and by calling lldb directly from the terminal and connecting to a remote debug server - no difference.
I ran into the same issue. My solution was to wrap a try/catch around it (I only use this code for debugging). See: DALIntrospection.m line #848
NSDictionary *DALPropertyNamesAndValuesMemoryAddressesForObject(NSObject *instance)
Or, if you're running on iOS 7, the private instance method _ivarDescription will print all the ivars for you (similar instance methods are _methodDescription and _shortMethodDescription).
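For example, from the debugger (myObject is a hypothetical instance pointer):
(lldb) po [myObject _ivarDescription]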
I met the same problem.
My solution was simply to alloc/init the property before the assignment that caused the crash.
My coworkers and I ran into this today, and we eventually found a workaround using lldb's Python API. The manual way is to run script and enter:
options = lldb.SBExpressionOptions()
options.SetTrapExceptions(False)
print(lldb.frame.EvaluateExpression('ThisThrowsAndCatches()', options).value)
This could be packaged into its own command via command script add.
error: Execution was interrupted, reason: internal ObjC exception breakpoint(-3).. The process has been returned to the state before expression evaluation.
Note that lldb specifically points to the internal breakpoint -3 that caused the interruption.
To see the list of all internal breakpoints, run:
(lldb) breakpoint list --internal
...
Kind: ObjC exception
-3: Exception breakpoint (catch: off throw: on) using: name = 'objc_exception_throw', module = libobjc.A.dylib, locations = 1
-3.1: where = libobjc.A.dylib`objc_exception_throw, address = 0x00007ff81bd27be3, unresolved, hit count = 4
Internal breakpoints can be disabled like regular ones:
(lldb) breakpoint disable -3
1 breakpoints disabled.
In case lldb keeps getting interrupted, you might also need to disable the breakpoint's individual locations:
(lldb) breakpoint disable -3.*
1 breakpoints disabled.
In my particular case there were multiple exception breakpoints I had to disable before I finally got the expected result:
(lldb) breakpoint disable -4 -4.* -5 -5.*
6 breakpoints disabled.

Using system symbol table from VxWorks RTP

I have an existing project, originally implemented as a VxWorks 5.5 style kernel module.
This project creates many tasks that act as a "host" to run external code. We do something like this:
void loadAndRun(char* file, char* function)
{
    // load the module
    int fd = open(file, O_RDONLY, 0644);
    loadModule(fd, LOAD_ALL_SYMBOLS);

    SYM_TYPE type;
    FUNCPTR func;
    symFindByName(sysSymTbl, function, (char**) &func, &type);

    while (true)
    {
        func();
    }
}
This all works a dream. However, the functions that get called are non-reentrant, with global data all over the place, etc. We have a new requirement to run multiple instances of these external modules, and my obvious first thought is to use VxWorks RTPs to provide memory isolation.
However, no matter what I try, I cannot persuade my new RTP project to compile and link.
error: 'sysSymTbl' undeclared (first use in this function)
If I add the correct include:
#include <sysSymTbl.h>
I get:
error: sysSymTbl.h: No such file or directory
And if I just define it extern:
extern SYMTAB_ID sysSymTbl;
I get:
error: undefined reference to `sysSymTbl'
I haven't even begun trying to stitch in the actual module-load code; at the moment I just want to get the symbol lookup working.
So, is the system symbol table accessible from VxWorks RTP applications? Can moduleLoad be used?
EDIT
It appears that what I am trying to do is covered by the Application Programmer's Guide in the section on Plugins (section 4.9 for V6.8) (thanks #nos), which is to use dlopen() etc., like this:
void *hdl = dlopen("pathname", RTLD_NOW);
FUNCPTR func = (FUNCPTR) dlsym(hdl, "FunctionName");
func();
However, I still end up in linker hell, even when I specify -Xbind-lazy -non-static to the compiler.
undefined reference to `_rtld_dlopen'
undefined reference to `_rtld_dlsym'
The problem here was that the documentation says to specify -Xbind-lazy and -non-static as compiler options; in fact, they should be passed to the linker.
libc.so.1 for the appropriate build target is then required on the target to satisfy the run-time link requirements.
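With the link fixed, a fuller sketch of the plugin pattern with error checks (the path and entry-point name are placeholders; dlerror() reports why a load or lookup failed):
#include <dlfcn.h>
#include <stdio.h>

typedef int (*entry_fn)(void);

int runPlugin(const char *path, const char *name)
{
    void *hdl = dlopen(path, RTLD_NOW);
    if (hdl == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return -1;
    }
    entry_fn fn = (entry_fn) dlsym(hdl, name);
    if (fn == NULL) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(hdl);
        return -1;
    }
    int rc = fn();
    dlclose(hdl);
    return rc;
}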