I have some old Linux code I'm trying to port to Windows.
When I first built it as a straight native DLL, I had no issues with this piece of code, but when I tried making it a mixed-mode C++/CLI DLL, I got an unresolved external symbol error on this:
extern "C" char** environ;
Why would this work for native but not C++/CLI?
Any idea how to work around this, or what it even does?
That holds the environment variables (PATH, etc.). Strictly speaking it's POSIX, rather than the C standard itself, that requires environ to point to an array of these variables. On many platforms they're also passed as an optional third argument to the main entry point function.
Apparently, for some reason, C++/CLI doesn't initialize it.
To fix that, you can allocate the array yourself and fill it using either getenv (C) or Environment::GetEnvironmentVariables (C++/CLI). I don't know of any in-place fix, but it shouldn't be too hard.
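Alternatively, rather than rebuilding the array, here is a minimal sketch that reuses the Microsoft CRT's _environ global from <stdlib.h> (the InitEnviron name is just illustrative; call it once before the ported code touches environ):

#include <stdlib.h>

extern "C" { char** environ = NULL; }   // define the symbol the linker couldn't find

extern "C" void InitEnviron(void)
{
    environ = _environ;   // point it at the CRT's copy of the environment
}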
I'm calling a function from a compiled C library from a third-party developer which I cannot divulge. The problem is that the function returns an error when the app is built using Xcode 8, but works fine when using Xcode 7. I'm calling the library function from a .mm file in my application. I know there's a possibility that this is caused by the third-party library, but what changes between the Xcode versions might be affecting this? I have no idea where to start, and cannot paste code here.
I figured out a workaround for this but still don't know why it behaves like that.
I found that the cause of the error was the Optimization Level. In Xcode 8, I need to set an optimization level for it to work; otherwise it fails.
The specific source code that fails is this:
char subject[256];
memset(&subject, 0x00, sizeof(subject));
strcpy(subject, "Test");
mail.emailSubject = subject;
I replaced above code with this:
mail.emailSubject = (char*)"Test";
If anyone can explain, please feel free. Thanks!
The first block of code allocates a char array on the stack and then assigns it to emailSubject, whose type I don't know. If the assignment does not copy the string but simply stores the pointer, then when the function returns the stack memory will be deallocated and emailSubject will point to garbage.
When you assign "Test" directly, the compiler places the literal in static storage, which is not deallocated when the function returns.
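A minimal sketch of the two cases, assuming emailSubject is a plain char pointer (the real type of mail is unknown):

#include <string.h>

struct Mail { const char* emailSubject; };

struct Mail make_mail_dangling(void)
{
    char subject[256];
    memset(subject, 0x00, sizeof(subject));
    strcpy(subject, "Test");
    struct Mail mail;
    mail.emailSubject = subject;   /* pointer into this stack frame */
    return mail;                   /* subject's storage is dead after return */
}

struct Mail make_mail_ok(void)
{
    struct Mail mail;
    mail.emailSubject = "Test";    /* string literal: static storage, stays valid */
    return mail;
}

If the library only stores the pointer rather than copying the string, the first version hands it a dangling address, and whether that actually misbehaves depends on what the optimizer does with the dead stack slot - which would explain why the failure shows up only at some optimization levels.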
Hope it helps.
I have heard that C doesn't have closures, and today I saw closures being used in Objective-C. Are closures supported in Objective-C but not in C?
Update: thanks for all the answers. I found this guide on the web on blocks as well: http://pragmaticstudio.com/blog/2010/7/28/ios4-blocks-1
Apple added the ^ operator to add closure support. It is not tied to Objective-C, however, and can be used in C and C++ as well, as long as you compile the project with Apple's branch of GCC or with LLVM. This new feature is called blocks.
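For instance, a minimal block, assuming you compile with Apple's toolchain or with clang and -fblocks (the same syntax works in C, C++, and Objective-C):

#include <stdio.h>

int main(void)
{
    int base = 10;
    int (^addBase)(int) = ^(int x) { return x + base; };   /* captures base by value */
    printf("%d\n", addBase(5));   /* prints 15 */
    return 0;
}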
C has closures in the form of application-defined structures that contain both a function pointer and data pointer. The problem is just that many/most interfaces that take a callback pointer (like qsort) accept only the function pointer and not a corresponding data pointer, making it impossible to pass closures to them.
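For example, a minimal sketch of that idiom, with the function pointer and the captured data carried together and the callee taking both:

#include <stdio.h>

typedef struct {
    int (*fn)(void* env, int x);   /* the code part */
    void* env;                     /* the captured environment */
} Closure;

static int call_closure(const Closure* c, int x)
{
    return c->fn(c->env, x);       /* always pass the environment back in */
}

static int add_offset(void* env, int x)
{
    return x + *(int*)env;         /* uses the captured offset */
}

int main(void)
{
    int offset = 10;
    Closure add10 = { add_offset, &offset };
    printf("%d\n", call_closure(&add10, 5));   /* prints 15 */
    return 0;
}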
By the way, it's theoretically possible to add closure support at the library level without assistance from the compiler, i.e. create a library that would return a function pointer to a closure. However, the library code would be rather implementation/machine-dependent. It would need to allocate space for executable code and generate code to pass a fixed pointer value (saved as part of the closure object) along with other arguments to the function.
I've recently discovered the following in my code:
for (NSInteger i; i<x; i++){
...
}
Now, clearly, i should have been initialised. What I find strange is that in the "debug" profile (Xcode), this error goes undetected and the for loop executes without issue, but when the application is released using the "release" profile, a crash occurs.
What flags are responsible for letting this kind of mistake execute in the "debug" profile?
Thanks in advance.
This could be considered a Heisenbug. A declaration without an initialization will typically allocate some space in the stack frame for the variable and if you read the variable you will get whatever happened to be at that location in memory. When compiled for the debug profile the storage for variables can shift around compared to release. It just happens that whatever is in that location in memory for debug mode does not cause a crash (probably a positive number) but when in release mode it is some value that causes a crash (probably a negative number).
The Clang static analyzer should detect this. I always have the "analyze when building" option switched on.
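As a minimal illustration of what such a loop is doing (don't ship this - the read of i is undefined behavior):

#include <stdio.h>

int main(void)
{
    int x = 10;
    for (int i; i < x; i++) {   /* i starts as whatever garbage occupies its slot */
        printf("%d\n", i);
    }
    return 0;
}

Depending on the build, the leftover value may already be >= x (the loop never runs), a small non-negative number (the loop appears to work), or something that breaks whatever the body does with i.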
In the C language, using an uninitialized variable isn't an error but undefined behavior.
Undefined behavior exists because C is designed to be a very efficient low-level language. Using an uninitialized variable is undefined behavior because it allows the compiler to optimize the variable's allocation, as no default value is required.
But the compiler is licensed to do whatever it wants when undefined behavior occurs. The C FAQ says:
Anything at all can happen; the Standard imposes no requirements. The program may fail to compile, or it may execute incorrectly (either crashing or silently generating incorrect results), or it may fortuitously do exactly what the programmer intended.
So any implementation of an undefined behavior is valid (even if it produces code that formats your hard drive).
Xcode uses different optimizations for the Debug and Release configurations. The Debug configuration has no optimization (the -O0 flag), so the compiled executable stays close to your code, allowing you to debug it more easily. On the other hand, the Release configuration produces strongly optimized executables (the -Os flag) because you want your application to run fast.
Due to that difference, undefined behaviours may (or may not) produce different results in Release and Debug configurations.
Though the LLVM compiler is quite verbose, it does not emit warnings by default for all undefined behaviors. You may however run the static analyzer, which can detect that kind of issue.
There is more information about undefined behavior and how compilers handle it in What Every Programmer Should Know About Undefined Behavior.
I doubt it is so much the flags as the compiler optimizing out the "unused" variable i. Release mode includes far more optimizations than debug mode.
Different compiler optimizations may or may not use a different memory location or register for your uninitialized variable. Different garbage (perhaps from previously used variables, computations, or addresses used by your app) will be left in these different locations before you start using the variable.
The "responsibility" lies with not initializing the variable, as what garbage is left in which locations may not be visible to the compiler, especially in debug mode with most optimizations off (i.e. you got "lucky" with the debug build).
i has not been initialized. Writing just NSInteger i; declares the variable without initializing it.
You can initialize it as shown below:
for (NSInteger i=1; i<x; i++){
...
}
Single stepping through code that uses any of the NS_INLINE functions from NSGeometry.h causes the debugger to lose sync with the current instruction pointer, making debugging the routines very difficult.
I've tried #undef NS_INLINE at the top of my implementation file, #define NS_INLINE in the precompiled header, looking for pragmas, compiler switches, etc., but no matter what, the functions always compile inline in my debug builds.
FWIW - NSMakeRect, NSMakeSize, etc. all compile inline.
Question is, how do I get NS_INLINE to compile to nothing?
NS_INLINE is wrapped in #if !defined(NS_INLINE), so you just need to define it appropriately before you include the Foundation headers. Glancing at the original declaration, you'll probably just need to remove __attribute__((always_inline)) for the debugger to catch your symbols (assuming you're generating all debug symbols and running a debug build; if not, you'll need a little more work to make them all visible). Ideally, you'll create your own macro local to your project/group/libs so you can debug your own code more easily.
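For example, a sketch for the precompiled header, assuming the GCC/Clang branch of Foundation's definition (roughly static __inline__ __attribute__((always_inline))) - because the header guards it with #if !defined(NS_INLINE), whatever you define first wins:

#define NS_INLINE static __inline__   /* same as Foundation's, minus always_inline */
#import <Foundation/Foundation.h>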
Is this really about stepping through built-in API functions, or are you trying to do something similar in your own code? If it's the former, is it just due to curiosity, or some other reason for debugging? If it's the latter, I'd suggest commenting out the NS_INLINE while debugging. If you're trying to change the inlining behavior of existing API functions, you may be disappointed, and there are probably better ways to go about it. If your intent is something else, please clarify so we can answer adequately.
I would like to pass some (DLL or not) function pointers as arguments to some DLL functions and call them from inside the DLL. I wonder if it is safe, because I have found the following information at http://publib.boulder.ibm.com/infocenter/zos/v1r10/index.jsp?topic=/com.ibm.zos.r10.cbcpx01/fpref.htm :
In DLL code, it is assumed that a function pointer points to a function descriptor. A function pointer call is made by first obtaining the function address through dereferencing the pointer; and then, branching to the function entry. When a non-DLL function pointer is passed to DLL code, it points directly to the function entry. An attempt to dereference through such a pointer produces an undefined function address. The subsequent branching to the undefined address may result in an exception.
Does this rule apply to Visual Studio and other compilers as well?
What precisely I am trying to do is solve the problem of memory allocation and deallocation between various DLL and non-DLL functions. My idea is to pass two function pointers - for common allocation and deallocation functions - to every DLL in some initialization call (e.g. Initialize(&malloc, &free)) and then do all memory management through these common and thus always compatible functions. A sketch of the idea is below.
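A minimal sketch of what I have in mind, on the DLL side (all names illustrative):

#include <cstdlib>

typedef void* (*AllocFn)(size_t);
typedef void  (*FreeFn)(void*);

static AllocFn g_alloc = 0;
static FreeFn  g_free  = 0;

extern "C" __declspec(dllexport) void Initialize(AllocFn a, FreeFn f)
{
    g_alloc = a;   // remember the host's allocator
    g_free  = f;   // and its matching deallocator
}

extern "C" __declspec(dllexport) char* MakeBuffer(size_t n)
{
    return static_cast<char*>(g_alloc(n));   // allocated with the host's CRT,
                                             // so the host can free it safely
}

The host would call Initialize(&malloc, &free) once after loading the DLL.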
That quoted behavior is specific to z/OS function descriptors and doesn't apply here: on Windows, DLL code treats function pointers in exactly the same way as non-DLL code. If that were not the case, one could not use standard library functions like qsort() (which expects a function pointer argument) within a DLL.
Passing function pointers to a DLL has been done for generations.
All the callback functions used in GUI programming (for example, progress bar updates) use function pointers.
When developing using Win32, is there a reason you want to pass function pointers to malloc/free? Why not simply use malloc/free directly?
That is how I have always done it in the past. Not that this guarantees it's the correct way.
In fact, you might take that as an indication that it's the worst possible way :)