Detect CPU Architecture (32-bit / 64-bit) at runtime in Objective-C (Mac OS X) - objective-c

I'm currently writing a Cocoa application which needs to execute some (console) applications that are optimized for 32-bit and 64-bit. Because of this I would like to detect what CPU architecture the application is running on, so I can start the correct console application.
So in short: how do I detect if the application is running on a 64 bit OS?
Edit: I know about Mach-O fat binaries; that was not my question. I need to know this so I can start the correct non-bundled (console) application: one is optimized for x86 and one for x64.

There is a super-easy way. Compile two versions of the executable, one for 32-bit and one for 64-bit, and combine them with lipo. That way, the right version will always get executed.
gcc -lobjc somefile.m -o somefile -m32 -march=i686
gcc -lobjc somefile.m -o somefile2 -m64 -march=x86_64
lipo -create -arch i386 somefile -arch x86_64 somefile2 -output somefileUniversal
Edit: or just compile a universal binary in the first place with gcc -arch i386 -arch x86_64
In response to OP's comment:
if (sizeof(int *) == 4) {
    // system is 32-bit
} else if (sizeof(int *) == 8) {
    // system is 64-bit
}
EDIT: D'oh! I didn't realise you'd need runtime checking... Going through the output of sysctl -A, two variables look potentially useful. Try parsing the output of sysctl hw.optional.x86_64 and sysctl hw.cpu64bit_capable. I don't have a 32-bit Mac around to test this, but both of these are set to 1 in Snow Leopard on a Core 2 Duo Mac.
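For what it's worth, here is a minimal, untested sketch of querying hw.cpu64bit_capable programmatically instead of parsing sysctl output; the helper name is made up, and a missing key is simply treated as "not 64-bit capable":
#include <stddef.h>
#include <sys/types.h>
#include <sys/sysctl.h>

// Returns 1 if the CPU can execute 64-bit code, 0 otherwise.
// hw.cpu64bit_capable may not exist on older systems, so a failing
// sysctlbyname() call is treated as "not capable".
static int cpuIs64BitCapable(void)
{
    int capable = 0;
    size_t size = sizeof(capable);
    if (sysctlbyname("hw.cpu64bit_capable", &capable, &size, NULL, 0) != 0)
        return 0;
    return capable != 0;
}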

Use [[NSRunningApplication currentApplication] executableArchitecture], which returns one of the following constants:
NSBundleExecutableArchitectureI386
NSBundleExecutableArchitectureX86_64
NSBundleExecutableArchitecturePPC
NSBundleExecutableArchitecturePPC64
For example:
switch ([[NSRunningApplication currentApplication] executableArchitecture]) {
    case NSBundleExecutableArchitectureI386:
        // TODO: i386
        break;
    case NSBundleExecutableArchitectureX86_64:
        // TODO: x86_64
        break;
    case NSBundleExecutableArchitecturePPC:
        // TODO: ppc
        break;
    case NSBundleExecutableArchitecturePPC64:
        // TODO: ppc64
        break;
    default:
        // TODO: unknown arch
        break;
}

You don't have to detect it manually to achieve that effect. One Mach-O executable file can contain binaries for both 32-bit and 64-bit Intel machines, and the kernel runs the most appropriate one automatically. If you're using Xcode, there's a setting in the project inspector where you can set the architectures (ppc, i386, x86_64) you want to include in a single universal binary.
Also, remember that on OS X, running a 64-bit kernel (with Snow Leopard) and being able to run a 64-bit user-land app are two orthogonal concepts. If you have a machine with a 64-bit CPU, you can run a user-land program in 64-bit mode even when the kernel is running in 32-bit mode (with Leopard or Snow Leopard), as long as all of the libraries you link against are available in 64-bit. So it's not that useful to check whether the OS is 64-bit capable.

Usually, you shouldn't need to check at runtime whether you're on 64 or 32 bits. If your host application (that's what I'd call the app that launches the 64-bit or 32-bit tools) is a fat binary, a compile-time check is enough. Since it will get compiled twice (once for the 32-bit part of the fat binary, once for the 64-bit part), and the right one will be launched by the system, you'll compile in the right launch code just by writing something like
#if __LP64__
NSString *vExecutablePath = [[NSBundle mainBundle] pathForResource: @"tool64" ofType: @""];
#else
NSString *vExecutablePath = [[NSBundle mainBundle] pathForResource: @"tool32" ofType: @""];
#endif
[NSTask launchedTaskWithLaunchPath: vExecutablePath ...];
If the user somehow explicitly launches your app in 32-bit mode on a 64-bit Mac, trust that they know what they're doing. It's an edge case anyway, and why break things for power users out of a misplaced sense of perfection? You may even be glad of it yourself if you ever discover a 64-bit-only bug and can tell users that the workaround is to launch in 32-bit mode.
You only need a real runtime check if your app itself is only 32 bits (e.g. Carbon GUI with command-line helper). In that case, host_processor_info or sysctl or similar are probably your only route if for some weird reason you can't just lipo the two executables together.

If you are on Snow Leopard, use NSRunningApplication's executableArchitecture.
Otherwise, I would do the following:
- (BOOL)is64Bit
{
#if __LP64__
    return YES;
#else
    return NO;
#endif
}

To programmatically get a string with the CPU architecture name:
#include <stdlib.h>
#include <sys/types.h>
#include <sys/sysctl.h>
// Determine the machine name, e.g. "x86_64".
size_t size;
sysctlbyname("hw.machine", NULL, &size, NULL, 0); // Get the size of the string to be returned.
char *name = malloc(size);
sysctlbyname("hw.machine", name, &size, NULL, 0);
// Do stuff with name...
free(name);
To do the same thing in a shell script:
name=$(sysctl -n hw.machine)

The standard way of checking the OS version (and hence whether it's Snow Leopard, a 64-bit OS) is detailed here.
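The linked details aren't reproduced here, but as a rough sketch of the idea, assuming the Carbon-era Gestalt API that was the usual way to read the system version at the time (the 10.6 comparison is my own illustration):
#include <CoreServices/CoreServices.h>

// Rough sketch: Snow Leopard is Mac OS X 10.6, so compare the major/minor version.
SInt32 major = 0, minor = 0;
Gestalt(gestaltSystemVersionMajor, &major);
Gestalt(gestaltSystemVersionMinor, &minor);
int isSnowLeopardOrLater = (major > 10) || (major == 10 && minor >= 6);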

Related

CMake Configuration issue: Problem enabling 64 bit Fortran compilation on Windows using Intel OneAPI Compilers

I am trying to enable 64-bit integer size for a sample hello-world style Fortran test program, in an MPI setup.
OS: Windows 10
Compilers used: Intel OneAPI 2021.4.0
MPI: Intel MPI
There are two scenarios I tried to test:
using a single-line command to compile an executable directly,
mpiifort -o test.exe -i8 test.f90
using a CMakeLists.txt file to compile the test with the necessary Fortran 64-bit "-i8" option and find_package(MPI). I made sure the MPI libraries are picked up (impi.lib, libmpi_ilp64.lib) and that MPI_Fortran_COMPILER points to mpiifort.
I use a test that determines the size of MPI_INTEGER via the API MPI_Type_size() to check whether the 64-bit environment is enabled for the Fortran/MPI setup.
call MPI_Type_size(MPI_INTEGER, sz, ierrsiz)
print *, 'sizeof(MPI_INTEGER) ', sz
Scenario 1 prints the correct size of 8 bytes (64-bit).
Scenario 2 prints an incorrect size of 4 bytes (32-bit).
I do use the "-i8" option in the CMake build system to enable the 64-bit environment, but MPI still seems to be 32-bit.
Kindly help.

What is EM_SPARC32PLUS for?

I found that Linux and GNU Binutils define a special machine type, EM_SPARC32PLUS, in the ELF header. Why is it needed? What makes SPARC V8+ so special that it cannot use EM_SPARC?
I think there should be an important reason for a new machine type, because it breaks compatibility with old programs, and all other architectures tend to use the old machine type for as long as possible.
Starting with elf-em.h, we see the following (cherry-picked) entries:
#define EM_SPARC 2
#define EM_SPARC32PLUS 18 /* Sun's "v8plus" */
#define EM_SPARCV9 43 /* SPARC v9 64-bit */
Some Googling led me to this reference page for Sun Studio 12, which says:
v8plus
Compile for the V8plus version of the SPARC-V9 ISA. By definition, V8plus means the V9 ISA, but limited to the 32-bit subset defined by the V8plus ISA specification, without the Visual Instruction Set (VIS), and without other implementation-specific ISA extensions.
This option enables the compiler to generate code for good performance on the V8plus ISA.
The resulting object code is in SPARC-V8+ ELF32 format and only executes in a Solaris UltraSPARC environment—it does not run on a V7 or V8 processor.
Example: Any system based on the UltraSPARC chip architecture
It seems to be essentially the 32-bit version of the V9 architecture for the UltraSPARC.
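A loader or tool that needs to tell plain V8 objects apart from V8+ objects can therefore look at the e_machine field directly. A minimal, illustrative sketch using the Linux <elf.h> constants quoted above (the helper is hypothetical, and it assumes the file's byte order matches the host's, which is not the case when inspecting big-endian SPARC objects on a little-endian machine):
#include <elf.h>
#include <stdio.h>

// Print which SPARC flavour an ELF object was built for, based on e_machine.
// e_machine sits at the same offset in 32-bit and 64-bit ELF headers, so
// reading an Elf32_Ehdr is enough to classify all three cases.
static void printSparcFlavour(const char *path)
{
    Elf32_Ehdr hdr;
    FILE *f = fopen(path, "rb");
    if (f == NULL || fread(&hdr, sizeof(hdr), 1, f) != 1) {
        if (f != NULL) fclose(f);
        return;
    }
    fclose(f);

    switch (hdr.e_machine) {
    case EM_SPARC:       printf("%s: SPARC V7/V8\n", path); break;
    case EM_SPARC32PLUS: printf("%s: SPARC V8+ (needs a V9 CPU)\n", path); break;
    case EM_SPARCV9:     printf("%s: SPARC V9, 64-bit\n", path); break;
    default:             printf("%s: not a SPARC object\n", path); break;
    }
}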
See also:
Can 32-bit SPARC V8 application run on 64-bit SPARC V9?

CPU killed by SIGXCPU using OpenCL and mono

I have got a very similar problem to the one stated here: Intel CPU OpenCL in Mono killed by SIGXCPU (Ubuntu)
Essentially, I have a very simple C# application using OpenCL (through the OpenCL.Net wrapper, but that shouldn't make a difference, as it merely wraps the native functions and nothing more). In the code I just build a kernel and then allocate a big array of floats.
To be more specific, my platform is Ubuntu 12.04, OpenCL 1.1 (with CUDA) and Mono 3.0.3.
Problem: when running my code through Mono I get a CPU LIMIT EXCEEDED error.
A few things:
If I set a breakpoint (in MonoDevelop) somewhere between building the kernel and the allocation, it works.
Changing the array size to a small one also makes it work.
strace doesn't show anything useful. I also tried passing a callback to ClBuildProgram (note: if I comment out the line with ClBuildProgram, it works).
Any ideas?
Here's what worked for me in the end.
There is a major problem with Mono: it uses SIGXCPU for GC handling (which is strange, by the way). Unfortunately, OpenCL uses it as well, so the two conflict.
The workaround is to modify the Mono code.
Go to the source directory and grep -r SIGXCPU . In my Mono (3.0.3) there were two important files:
./libgc/pthread_stop_world.c:# define SIG_THR_RESTART SIGXCPU
./mono/metadata/sgen-os-posix.c:const static int restart_signal_num = SIGXCPU;
Replace SIGXCPU with SIGWINCH and recompile. One note: I am not sure whether this breaks something else, but so far it looks OK and the OpenCL problem is gone. If it does break something (like the GUI), replace SIGWINCH with a different signal that is unused on your system (see signal.h for the signal definitions).

JNI - compile dll as 64 bit

I compile my .dll with the following command: gcc -mno-cygwin -I"/cygdrive/c/Program Files/Java/jdk1.7.0_04/include" -I"/cygdrive/c/Program Files/Java/jdk1.7.0_04/include/win32" -Wl,--add-stdcall-alias -shared -o CalculatorFunctions.dll CalcFunc.c
I use GlassFish for Eclipse. The whole system is a CORBA client-server. When I start the server from Eclipse, it's fine. But when I try to run the server from the CMD (because I want to set a port and host address for the server), it gives me: Exception: ... .dll: Can't load IA 32-bit .dll on a AMD 64-bit platform
I searched through other topics and saw that I should try changing my JDK to 32-bit; that didn't work either.
So the other solution I read about is to compile the .dll as 64-bit. What command do I need to use, or how do I do that at all?
Thanks in advance! :)
You need not only a command but a whole 64-bit MinGW toolchain, starting with a 64-bit compiler. Then the parameters of your gcc invocation should work the same.
Beware that 64-bit is not just a matter of getting the code to compile. Primitive data types have different sizes, so any code making assumptions without checking sizeof is a potential issue; most prominently, pointer arithmetic.
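As an illustration (a made-up snippet, not taken from the question's code), the classic trap is stashing a pointer in a 32-bit integer, which truncates the address on a 64-bit build:
#include <stdint.h>
#include <stdlib.h>

void pointerWidthExample(void)
{
    char *buffer = malloc(16);

    /* Broken on 64-bit: int is 4 bytes, the pointer is 8 bytes,
       so the upper half of the address is silently discarded:
       int handle = (int)buffer; */

    /* Portable: intptr_t (or jlong when handing the value to Java)
       is wide enough to round-trip a pointer on 32-bit and 64-bit builds. */
    intptr_t handle = (intptr_t)buffer;

    free((void *)handle);
}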

Simple DLL injection not working using AppInit_DLLs. DllMain() not getting called

I've written the simplest injection dll possible. Here is the code in its entirety:
#include "stdafx.h"
#include <stdio.h>
BOOL APIENTRY DllMain(HANDLE hModule,
                      DWORD ul_reason_for_call,
                      LPVOID lpReserved)
{
    FILE *File = fopen("D:\\test.txt", "w");
    if (File != NULL)
    {
        fclose(File);
    }
    return TRUE;
}
Super simple, right? Well, I can't even get this to work. This code compiles to a DLL, and I've placed the path to this DLL in the registry under [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs]. I should also mention that the LoadAppInit_DLLs registry value is set to 1. From doing this I expect to see the file "D:\test.txt" appear when I start other applications (like notepad.exe), but it doesn't. I don't get it. There is another .dll (which I'm trying to replace), very old and written in Visual Studio '97, that works just fine when I set AppInit_DLLs to point to it and start an arbitrary application. I can tell that it is getting loaded when other applications are started.
I'm not sure what's going on here, but this should work, shouldn't it? It can't get any simpler. I'm using VS 2010, and by all accounts I think I've created a very plain-Jane .dll, so I don't think any project settings should be out of whack, but I'm not completely sure about that. What am I missing here?
Setup Info
OS: Windows 7 64-bit
OS Version: 6.1.7601 Service Pack 1 Build 7601
IDE: Visual Studio 2010
IDE version: 10.0.40219.1 SP1Rel
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs]
is NOT the registry key used for injection into 32-bit processes. It's the registry key to use if your OS is 32-bit.
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs] is the correct registry key to use if your OS is 64-bit.
I was under the assumption that the former was for 32-bit processes and the latter was for 64-bit processes. But really, the OS is going to ignore one of those registry keys depending on whether the OS itself is 64-bit or 32-bit.
@Ultratrunks: This is not completely correct.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs] is for both 32-bit and 64-bit OSes.
But if we want to run 32-bit processes on a 64-bit machine, then we need to modify the following registry key:
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs]
WOW64 is basically the mechanism that makes a 64-bit system capable of running 32-bit processes.
I verified this by running my programs on both 32-bit and 64-bit OSes, and by running 32-bit processes on a 64-bit machine.
Hence:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs for a 32-bit or 64-bit OS
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs for 32-bit processes on a 64-bit OS
First of all, about SOFTWARE\Microsoft vs. SOFTWARE\Wow6432Node\Microsoft: it's true that for both 32-bit and 64-bit you go into SOFTWARE\Microsoft, and if you want to inject a 32-bit DLL on a 64-bit OS you go into SOFTWARE\Wow6432Node\Microsoft.
My problem was that each component of the value needs to be at most 8 characters; if the path or name is longer than this, you need to use the short (8.3) form.
Example: your DLL name then looks like inject~1.dll
Don't forget to set all three registry values:
AppInit_DLLs -> the DLL name if it is in System32, or the full path, without quotes
LoadAppInit_DLLs -> 1
RequireSignedAppInit_DLLs -> 0