I'm writing a Cocoa application and I'm trying to link it with the MATLAB Engine to call MATLAB functions. So far I've added the .app/extern/include/ directory (the one that contains the engine.h header) to the header search paths (and subsequently #imported engine.h) and added the .app/extern/lib/maci64 directory to the library search paths (though that doesn't really do anything). I've been looking through the MATLAB documentation and it looks like MATLAB has its own compiler, 'mex', for MATLAB engine applications… but clearly that doesn't work for a Cocoa app (and anyway, on my system, the 'mex' command starts pdfTeX and has nothing to do with MATLAB). Also, the engine libraries in that directory are in an odd format (.map), which seems to be a debugging symbol file rather than a normal Mac library (dylib, .a, framework, etc.). Thoughts?
Is it possible to programmatically load a .dll or static library (.a) file that returns an assembly, in Objective-C on Mac OS X?
How is assembly loading and unloading done in Objective-C on Mac OS X?
I'll admit that even after reading Microsoft's documentation on the Assembly Class, it's still not clear to me what an Assembly is. They say:
Represents an assembly, which is a reusable, versionable, and self-describing building block of a common language runtime application.
The "reusable, versionable and self-describing" part sounds like a framework.
If indeed that's what you want to load, then you have a number of options. Your best bet is to just link against the framework. The OS will automatically load it for you when your app starts up.
If you want to load it manually, there are a number of ways to do that. If it's a framework you're going to ship with your application, then you can simply put it into your app bundle's Frameworks folder, and then use:
// Look up the framework bundle by its identifier, then load its code.
NSBundle *frameworkBundle = [NSBundle bundleWithIdentifier:@"<your bundle's identifier>"];
if (frameworkBundle != nil)
{
    [frameworkBundle load];
}
You can also use dlopen() (see the dlopen(3) man page for details). This will load a dynamic library into your process space.
I have never had a reason to use dlopen() directly. I usually just link against the framework. On those rare occasions where my app may need to run on an older OS that doesn't support the framework, I have used the manual loading described above via NSBundle.
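If you do go the dlopen() route for a framework shipped inside your app bundle, a minimal sketch might look like this (Foo.framework is a hypothetical name; the dlerror() check is the part people usually forget):

#import <Foundation/Foundation.h>
#include <dlfcn.h>

// Build the path to a framework in the app bundle's Frameworks folder,
// then load its binary manually. "Foo" is a placeholder name.
NSString *path = [[[NSBundle mainBundle] privateFrameworksPath]
    stringByAppendingPathComponent:@"Foo.framework/Foo"];
void *handle = dlopen([path fileSystemRepresentation], RTLD_LAZY);
if (handle == NULL)
{
    NSLog(@"dlopen failed: %s", dlerror());
}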
Does anyone know where I can find a FireBreath sample (either Mac OS X or Windows) that illustrates how to create a plugin that includes one or more other libraries (.DLLs or .SOs) that each rely on other sub-projects built as static libraries (LIBs)?
For example, let's say that the FireBreath plugin is called PluginA, and that PluginA calls methods from DLL_B and DLL_C. DLL_B and DLL_C are C++ projects. DLL_B calls methods from another project called LIB_D, and DLL_C calls methods from a project called DLL_E.
Therefore, the final package should contain the following files:
PluginA.dll
DLL_B.dll (which also incorporates LIB_D)
DLL_C.dll
DLL_E.dll
I am currently forced to dump all the source files into the PluginA solution, but this is just a bottleneck (for example, I cannot call libraries written in other languages, such as Objective-C on Mac OS X).
I tried following the samples on FireBreath, but couldn't get them to work, and I found no samples from other users who claimed they were able to get it to work. I tried using CMake, and also running the solutions directly from Xcode, but the end result was the same (I received linking errors, and after deployment DLL_C couldn't find DLL_E, etc.).
Any help would be appreciated - thank you,
Mihnea
You're way overthinking this.
On Windows:
A DLL doesn't depend on a static library at runtime; anything it used from one was compiled in when the DLL was built.
DLLs that depend on another DLL generally just need that other DLL to be present in the same location or otherwise in the DLL search path.
Those two things taken into consideration, all you need to do is locate the .lib file that either is the static library or goes with the .dll and add a target_link_libraries call for each one (a sketch follows below). There is a page on firebreath.org that explains how to do this.
On Linux it's about the same, but using the normal rules for finding .so files.
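A rough CMake sketch of that idea, using the names from the question (the libs/ path is a placeholder, and I'm assuming the usual FireBreath convention of the plugin target being ${PROJECT_NAME}):

# Locate the import library for each DLL that PluginA calls directly.
find_library(DLL_B_LIB DLL_B PATHS ${CMAKE_SOURCE_DIR}/libs)
find_library(DLL_C_LIB DLL_C PATHS ${CMAKE_SOURCE_DIR}/libs)

# Link them into the plugin target. LIB_D is already baked into DLL_B,
# and DLL_E is only needed at runtime by DLL_C, so neither is linked
# here; DLL_E just has to ship alongside the plugin (or otherwise be
# on the DLL search path).
target_link_libraries(${PROJECT_NAME} ${DLL_B_LIB} ${DLL_C_LIB})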
What are the differences between byte code binary executables, such as Java class files, Parrot bytecode files or CLR files, and machine code executables such as ELF, Mach-O and PE?
What are the distinctive differences between the two?
For example, which part of a class file corresponds to the .text section in the ELF structure?
Or: they all have headers, but the ELF and PE headers specify a target architecture while the class file header does not.
Java class file
ELF file
PE file
Byte code is, as imulsion noted, an intermediate step, right before compilation into machine code. Because the last step is left to load time (and often runtime, as is the case with Just-In-Time (JIT) compilation), byte code is architecture-independent: the runtime (the CLR for .NET or the JVM for Java) is responsible for mapping the byte code opcodes to their underlying machine code representation.
By comparison, native code (Windows: PE, PE32+; OS X/iOS: Mach-O; Linux/Android/etc.: ELF) is compiled code, suited for a particular architecture (Android/iOS: ARM; most others: Intel 32-bit (i386) or 64-bit). These formats are all very similar, but still require sections (or, in Mach-O parlance, "load commands") to set up the memory structure of the executable as it becomes a process (old DOS supported the ".com" format, which was a raw memory image). In all of the above, you can say, roughly, the following:
Sections whose names begin with a "." are created by the compiler and are "default" sections, expected to have default behavior.
The executable has the main code section, usually called "text" or ".text". This is native code, which can run on the specific architecture.
Strings are stored in a separate section. These are used for hard-coded output (what you print out) as well as symbol names.
Symbols - which are what the linker uses to put together the executable with its libraries (Windows: DLLs, Linux/Android: shared objects, OS X/iOS: .dylibs or frameworks) - are stored in a separate section. Usually there is also a "PLT" (Procedure Linkage Table), which enables the compiler to simply put in stubs for the functions you call (printf, open, etc.) that the dynamic linker can connect when the executable loads.
The import table (in Windows parlance; in ELF this is the DYNAMIC section, and in Mach-O it is an LC_LOAD_DYLIB load command) is used to declare additional libraries. If those aren't found when the executable is loaded, the load fails and you can't run it.
The export table (for libraries/dylibs/etc.) holds the symbols that the library (or, on Windows, even an .exe) exports so that others can link against them.
Constants usually end up in what you see as ".rodata" (a small example follows below).
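A minimal C sketch of where the pieces above land (the section names are the usual ELF ones; Mach-O uses its own, such as __TEXT and __cstring):

#include <stdio.h>

static const int kAnswer = 42;   /* constant data: typically .rodata */
static int counter;              /* zero-initialized data: .bss      */

int main(void)                   /* function bodies land in .text    */
{
    counter += kAnswer;
    printf("counter = %d\n", counter);  /* the string literal also lives
                                           in a read-only data section */
    return 0;
}

Compiling this and dumping the result (objdump -h on Linux, otool -l on OS X) shows each of the sections described above.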
Hope this helps. Really, your question was vague.
TG
Byte code is a 'halfway' step. The Java compiler (javac) turns the source code into byte code. Machine code is the next step: the computer takes the byte code, turns it into machine code (which the computer can read directly) and then executes your program by reading that machine code. Computers cannot read source code directly, and likewise compilers cannot translate it immediately into machine code. You need the halfway step to make programs work.
Note that ELF binaries don't necessarily need to be machine/arch specific per se.
The interesting piece is the "interpreter" header field (the PT_INTERP program header): it holds the path name of a loader program that's executed instead of the actual binary. That loader is then responsible for loading the actual program, loading and linking libraries, etc. This is how e.g. ld.so comes in.
Theoretically one could create an ELF binary that holds Java bytecode (or a complete jar). This just needs some appropriate "interpreter" program which starts up a JVM and loads the code from the binary into it.
I'm not sure whether this has actually been done before, but it's certainly possible.
The same can be done with virtually any non-native code.
It could also serve for direct multiarch support via some VM like QEMU:
Let the target platform (libc + linker scripts) put the arch name into the interpreter program name (e.g. /lib/ld.so.x86_64, /lib/ld.so.armhf, ...).
Then, on a particular arch (e.g. x86_64), the one with the native arch name will point to the original ld.so, while the others point to some special one that calls up something like qemu-system-XXX.
Leaving the Apple policy aside and talking about the Objective-C language only:
Assume that my program calls into a .a library. Is it possible to grab a newer version of the .a from the internet and run that newer .a instead of the old one?
Thanks.
Not for statically linked libraries (.a), at least not with any level of sanity. You can certainly do it with dynamically loaded libraries (.so); it's one of their normal use cases. Have a look at dlopen, dlclose and dlsym from the dynamic loader (https://developer.apple.com/library/mac/#documentation/DeveloperTools/Reference/MachOReference/Reference/reference.html).
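A minimal sketch of that dlopen/dlsym/dlclose cycle (the library name and the do_work symbol are hypothetical stand-ins):

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Load the current copy of the library. */
    void *handle = dlopen("libplugin.dylib", RTLD_NOW);
    if (handle == NULL)
    {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up a function by name and call it. */
    int (*do_work)(void) = (int (*)(void))dlsym(handle, "do_work");
    if (do_work != NULL)
        printf("do_work returned %d\n", do_work());

    /* Unloading is what would let you swap in a newer copy of the
       library and dlopen it again. */
    dlclose(handle);
    return 0;
}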
This applies not just to iOS but to OS X apps as well (and probably other Unixes in general).
Static libraries (.a files) cannot be replaced while the program is running because they are part of the application binary. The application binary is mapped into the process's address space. If you try to change any part of it, you'll almost certainly end up crashing the app.
Dynamic libraries (.so files) are replaceable in theory. However, most applications load them once at the beginning or when first needed, and then they become part of the application's address space. I've heard that it is theoretically possible for an application to unload a dynamic library, but I've never seen it done in any real Cocoa application.
I'm trying to create a D application which uses a (third-party) COM .dll, so I can scrape a text box of another application and sound an error when a certain string shows up.
However, the third party doesn't provide the .lib, .def or .h files that go with the dll (at least not with the free trial version). I can create the .lib file with the implib tool, but I don't see any of the library's functions in the created .lib.
Their (Visual C++) samples use the #import directive to link it in; however, that is of no use to me ...
On a side note, how can I get the proper interfaces (in a .di with boilerplate that does the linking) of the dll automatically? I ask so that the correctness of the linkage doesn't depend on my (likely to be incorrect) translation of the functions. They do have a webpage which gives all the functions, but the object model is a bit chaotic, to say the least.
From what I know, COM libraries only expose a few functions, required to (un)register the library and to create objects.
You can, however, view the interfaces and functions in a COM .dll using the OLE/COM Object Viewer. It seems it might be able to output header files (.h). Afterwards, maybe you could use htod as a starting point for converting everything to D interfaces.
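To illustrate the first point, here is a small C sketch of what those few exports amount to ("thirdparty.dll" is a placeholder for the vendor's DLL):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* A COM DLL usually exports just DllGetClassObject, DllCanUnloadNow,
       DllRegisterServer and DllUnregisterServer; the actual object methods
       are reached through COM interfaces, not through named exports. */
    HMODULE mod = LoadLibraryA("thirdparty.dll");
    if (mod == NULL)
    {
        printf("LoadLibrary failed: %lu\n", GetLastError());
        return 1;
    }

    FARPROC getClassObject = GetProcAddress(mod, "DllGetClassObject");
    printf("DllGetClassObject %s\n", getClassObject ? "found" : "not found");

    FreeLibrary(mod);
    return 0;
}

That would also be consistent with the nearly empty .lib you got from implib: there is simply very little to import by name.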
The DMD distribution seems to include a COM sample (chello.d, dclient.d, dserver.d), and at first glance it doesn't look like it would require any .libs explicitly.
Unfortunately, I've never actually used COM in D, so I can't advise any further. I hope this helps in some way.
While I have yet to actually do COM work myself, I am trying to revive Juno over on GitHub/he-the-great. Part of the project is tlbimpd, which is what outputs a D file from a DLL.
I've tested the examples and successfully run tlbimpd. Please do try things out for your use and submit any issues.