ARM v8 ELF to Mach-O

I have a program in C with ARM v8 inline assembly. I've compiled it to produce a 64-bit ARM statically linked ELF. I need to be able to run this on an iPhone, but it's giving me the error 'Cannot execute binary file'. This is because I'm trying to run an ELF and not the Mach-O that the iPhone requires.
Is there any converter which can convert an ARM v8 ELF to Mach-O?
Most of what I've seen converts x86 ELF to Mach-O.

You should instead set up your toolchain (compiler as well as linker) correctly; you need a cross-toolchain for exactly that target.
If you are using GNU tools, consider this:
Rebuild your cross-compiler and binutils with a configure command that passes the proper target option (the GNU triplet).
For the iPhone this should be: ../configure --target=arm64-apple-darwin ... and so on.
(RTFM about setting up a cross-compiler toolchain.)
Hope that helps. In any case, the correct GNU triplet also determines the file header, i.e. whether you get ELF or Mach-O.
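If you have Apple's command-line tools available, an alternative to building a whole GNU cross-toolchain is to let clang target iOS directly. A minimal sketch (hello.c stands in for your own source file):
xcrun --sdk iphoneos clang -arch arm64 -o hello hello.c
file hello
The file command should then report a Mach-O 64-bit arm64 executable rather than an ELF. Note that even a correct Mach-O binary still has to be code-signed before a stock (non-jailbroken) iOS device will actually run it.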

Related

Fat Mach-O Executable Multi-purpose?

I am currently working with Mach-O executables on my Mac and a question just came to me: can a single fat Mach-O file serve multiple purposes? E.g.
could I have a single Mach-O file whose fat header specifies 2 executables:
Executable 1: a dynamic library, allowing its code to be loaded into external applications.
and
Executable 2: an executable, allowing it to be launched on its own through Terminal or as an application.
I just want to know: is it possible to have 2 executables with completely different functions inside a single Mach-O binary file?
Yes it is possible, but hardly useful. Before I get to why, here's how to create one:
Take this C file:
#ifdef __LP64__
int main(void)
#else
int derp(void)
#endif
{
    return 123;
}
Compile it as a 64-bit executable and a 32-bit shared library:
gcc -o t t.c -Wall
gcc -m32 -o t.dylib -Wall t.c -shared
And smash them together:
lipo -create -output t.fat t t.dylib
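You can check that both slices made it into the fat file with lipo; the output should look roughly like this:
$ lipo -info t.fat
Architectures in the fat file: t.fat are: x86_64 i386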
Now, why did I say this is hardly useful?
Because you're limited to one binary per architecture, and you have little to no control over which slice is used.
In theory, you can have slices for all these architectures in the same fat binary:
i386
x86_64
x86_64h
armv6
armv6m
armv7
armv7s
armv7k
armv7m
arm64
So you could smash an executable, a dylib, a linker and a kernel extension into one fat binary, but you'd have a hard time getting anything useful out of it.
The biggest problem is that the OS chooses which slice to load. For executables, that will always be the closest match for the processor you're running on. For dylibs, dylinkers and kexts, it will first be determined whether the process they're gonna be loaded into is 32- or 64-bit, but once that distinction has been made, there too you will get the slice most closely matching your CPU's capabilities.
I imagine back on Mac OS X 10.5 you could've had a 64-bit binary bundled with a 32-bit kext that it could try and load. However, outside of that I cannot think of a use case for this.

"curl_rule_01 declared as an array with negative size" error on built xcode 5 iOs7

I am trying to archive an iOS 7 app that uses the BBHTTP library, which includes libcurl. The build error:
curl_rule_01 declared as an array with negative size
The offending line in curlrules.h:
[CurlchkszEQ(long, CURL_SIZEOF_LONG)];
I've tried changing this in curlbuild.h:
#define CURL_SIZEOF_LONG 4
to
#define CURL_SIZEOF_LONG 8
because of 64-bit, but it didn't change anything.
Be careful: you must NOT change these macros inside curlbuild.h! This header is generated at configure time and it records (among other things) which architecture is targeted.
If you look at the pre-built static library provided by BBHTTP you can see that it only targets ARMv7 and ARMv7s architectures:
$ otool -fV External/libcurl.iOS/libcurl.iOS.appstore.a | grep Archive
Archive : External/libcurl.iOS/libcurl.iOS.appstore.a (architecture armv7)
Archive : External/libcurl.iOS/libcurl.iOS.appstore.a (architecture armv7s)
These are 32-bit architectures. Please refer to BBHTTP Dependencies for more details regarding how this static library has been compiled.
If you build an iOS app with iOS 7 as the deployment target, you almost certainly have the default architectures configured in your build settings, and those defaults include 32-bit slices plus a 64-bit slice.
So in that case you must include a libcurl fat static library that also contains a 64-bit slice (a.k.a. arm64).
The curl iOS build scripts from BBHTTP's author might help you. Otherwise, refer to Nick Zitzmann's pre-built libcurl.
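If you end up building the 32-bit and 64-bit slices of libcurl yourself, you can merge them into one fat static library with lipo. A sketch, where the input file names are placeholders for your own builds:
lipo -create libcurl-armv7.a libcurl-arm64.a -output libcurl.a
lipo -info libcurl.a
The second command should then list armv7 (and/or armv7s) alongside arm64.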

Building a cross-platform application (using Rust)

I started to learn Rust programming language and I use Linux. I'd like to build a cross-platform application using this language.
The question might not be related to Rust language in particular, but nonetheless, how do I do that? I'm interested in building a "Hello World" cross-platform application as well as for more complicated ones. I just need to get the idea.
So what do I do?
UPDATE:
What I want is the ability to run a program on 3 different platforms without changing the sources. Do I have to build a new binary file for each platform from the sources, just like I would in C?
To run on multiple platforms you need to build an executable for each, as @huon-dbauapp commented.
This is fairly straightforward with Rust. You use "--target=" with rustc to tell it what you want to build. The same flag works with Cargo.
For example, this builds for an ARM target:
cargo build --target=arm-unknown-linux-gnueabihf
See the Rust Flexible Target Specification for more about targets.
However, Rust doesn't ship with the std Crate compiled for ARM (as of June 2015). If this is the case for your target, you'll first need to compile the std Crates for the target yourself, which involves compiling the Rust compiler from source, and specifying the target for that build!
For information, most of this is copied from: https://github.com/japaric/ruststrap/blob/master/1-how-to-cross-compile.md
The following instructions are for gcc, so if you don't have this you'll need to install it. You'll also need the corresponding cross compiler tools, so for gcc:
sudo apt-get install gcc-arm-linux-gnueabihf
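Once that package is installed, the cross-compiler should be on your PATH; a quick check (the exact binary name, with or without a version suffix, can vary by distro):
arm-linux-gnueabihf-gcc --version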
Compile Rust std Crate For ARM
The following example assumes you've already installed the current Rust Nightly, so we'll just get the sources and compile for ARM. If you are using a different version of the compiler, you'll need to get that to ensure your ARM libraries match the version of the compiler you're using to build your projects.
mkdir ~/toolchains
cd ~/toolchains
git clone https://github.com/rust-lang/rust.git
cd rust
git pull
Build rustc for ARM
cd ~/toolchains/rust
./configure --target=arm-unknown-linux-gnueabihf,x86_64-unknown-linux-gnu
make -j4
sudo make install
Note "-j4" needs at least 8GB RAM, so if you hit a problem above try "make" instead.
Install the ARM rustc libraries into the native rustc installation
sudo ln -s $HOME/toolchains/rust/arm-unknown-linux-gnueabihf /usr/lib/rustlib/arm-unknown-linux-gnueabihf
Create hello.rs containing:
pub fn main() {
    println!("Hello, world!");
}
Compile hello.rs, and tell rustc the name of the cross-compiler (which must be in your PATH):
rustc -C linker=arm-linux-gnueabihf-gcc-4.9 --target=arm-unknown-linux-gnueabihf hello.rs
Check that the produced binary is really an ARM binary:
$ file hello
hello: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (SYSV), (..)
SUCCESS!!!
Check that the binary actually runs on an ARM device:
$ scp hello me@arm:~
$ ssh me@arm ./hello
Hello, world!
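If you drive the build with Cargo instead of invoking rustc directly, you can record the linker choice in a Cargo config file so you don't have to pass -C linker every time. A sketch, assuming your Cargo honours per-target linker settings in ~/.cargo/config and that the cross-gcc is named as below:
mkdir -p ~/.cargo
cat >> ~/.cargo/config <<'EOF'
[target.arm-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc-4.9"
EOF
cargo build --target=arm-unknown-linux-gnueabihf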
I've used this to build and link a Rust project with a separate C library as well. Instructions similar to the above on how to do this, dynamically or statically are in a separate post, but I've used my link quota up already!
The best way to figure this out is to download the source code for Servo and explore it on your own. Servo is absolutely a cross-platform codebase, so it will have to address all of these questions, whether they be answered in build/configuration files, or the Rust source itself.
It looks like the Rust compiler might not be ready to build standalone binaries for Windows yet (see the Windows section here), so this probably can't be done yet.
For POSIX systems it should mostly Just Work, unless you're trying to do GUI stuff.
Yes, you won't need to change the source, unless you are using specific libraries that are not cross-platform.
But as @dbaupp said, native executables are different on each platform: *nix uses ELF, Windows uses PE, and OS X uses Mach-O. So you will need to compile it for each platform.
I don't know the state of cross-compiling in Rust, but if it's already implemented, then you should be able to build all the binaries on the same platform; if not, you will have to build each binary on its own platform.

How to add a header file path in a CMake file

I am new to OpenCL. I have written a vector addition program in OpenCL with help from the Internet, and it includes one header file, CL/cl.h, via #include.
I am using an NVIDIA graphics card and the OpenCL implementation is the NVIDIA_GPU_Computing_SDK. My OpenCL headers reside at /opt/NVIDIA_GPU_Computing_SDK/OpenCL/common/inc. I can build OpenCL programs from the Linux terminal by adding this path when compiling my code, but now I want to write a CMake file for this code. My CMake files work fine for plain C programs, but not for OpenCL programs, because of this path problem. In the terminal I run cmake . and then make, which uses the Makefile generated by CMake; the error I get from make is:
fatal error: CL/cl.h: No such file or directory!
How can I include this header file path in my CMake file?
You will need to put these lines into CMakeLists.txt:
include_directories(/opt/NVIDIA_GPU_Computing_SDK/OpenCL/common/inc)
link_directories(/opt/NVIDIA_GPU_Computing_SDK/OpenCL/common/<lib or something similar>)
add_executable(yourexe src1.c ...)
target_link_libraries(yourexe OpenCL)
But beware that this is not portable, because the OpenCL SDK can be somewhere else on another machine. The proper way to do this is to use a FindOpenCL.cmake module.
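A minimal sketch of that approach (vecadd.c is a placeholder for your source; it assumes a FindOpenCL module is available, either the one shipped with CMake 3.1+ or a standalone script placed on your CMAKE_MODULE_PATH):
cmake_minimum_required(VERSION 3.1)
project(vecadd C)
find_package(OpenCL REQUIRED)
add_executable(vecadd vecadd.c)
target_include_directories(vecadd PRIVATE ${OpenCL_INCLUDE_DIRS})
target_link_libraries(vecadd ${OpenCL_LIBRARIES})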
Maybe you can use a CMake "find" script like:
http://gitorious.org/findopencl/findopencl/blobs/master/FindOpenCL.cmake
http://code.google.com/p/opencl-book-samples/source/browse/trunk/cmake/FindOpenCL.cmake?r=14
CMake file example from OpenCL Programming Guide Book: http://code.google.com/p/opencl-book-samples/source/browse/trunk/CMakeLists.txt?r=14
I was looking for a FindOpenCL.cmake macro which would work well on Windows, OS X and Linux. I couldn't find one which worked well on every platform, so I wrote a new one which I use in a couple of projects (webcl-validator and opencl-testsuite).
https://github.com/elhigu/cmake-findopencl
Windows support in particular is improved in this one.
On Windows it checks whether the 64-bit or 32-bit lib should be used, and it also tries to find the libraries according to the environment variables set by the Nvidia, Intel and AMD OpenCL SDKs.
It also tries to find the .lib under Cygwin, which didn't work with the other scripts I tried.

Building the C runtime along with an ANTLR program

I want to write a program using ANTLR (with the C target) and I want to ship the library (the C runtime distribution) with the package so that it can be used on another machine without installing ANTLR there. I've downloaded the latest version of this runtime from this link http://www.antlr.org/download/C . Could anyone please tell me if I can do that? Cheers.
Yes, you can link it statically, but how to do this depends on your platform. For Linux with gcc or llvm you can do:
g++ main.c -Wl,-Bstatic -lantlr3c -Wl,-Bdynamic -l<other dynamic libraries>
Any library listed after -Wl,-Bstatic (and before the following -Wl,-Bdynamic) will be linked statically into the executable.
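You can verify that the ANTLR runtime really was linked in statically by checking the dynamic dependencies of the resulting binary (a quick sanity check; a.out is whatever your output file is called):
ldd a.out | grep antlr
If the static link worked, this prints nothing, because libantlr3c is no longer a runtime dependency.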
Depending on your jurisdiction, the C target license requires that you include the license text with your program in some way.
I've not used the C target but have used the C# target. I assume they work in a similar way.
You will need to deploy the C runtime library with your program as the generated parser and lexer will use functions in this library.
You don't need to install ANTLR itself, e.g. AntlrWorks or any .jar files.