Why can an executable run on both Intel and AMD processors?

How is it that an executable can work on both AMD and Intel systems? Aren't AMD's and Intel's instruction sets different? How does the executable work on both? How exactly are programs compiled so that they work like that? And what exactly is the role of the OS in all this?

The only real difference between AMD and Intel processors of a given generation is their implementation of the instruction sets they support. x86 (32-bit) and x64 (64-bit) are the two most common instruction sets for Intel and AMD processors.
The differences come in when Intel and AMD implement those instruction sets in their chips, but the implementations have no effect on the instruction sets themselves. So if a program was compiled for an x64 processor, it can run on any processor that implements the x64 instruction set, which almost all modern Intel and AMD processors do.
A good example of an implementation difference is that Intel likes to hyperthread its cores whereas AMD likes to just add more cores. They do this for a multitude of reasons, such as power consumption and better concurrent processing, but it doesn't affect whether programs run, because it doesn't change the instruction set. Another difference between Intel and AMD is the number of pipeline stages, which can affect speed.
Huge complexities come into play when operating systems are considered. Windows has huge libraries that programs have to use if they want to run on Windows. The same goes for Linux and Mac OS X. Since these libraries aren't shared between operating systems, programs written for one operating system generally won't run on another.

Essentially, these days compilation is done for the OS rather than for the hardware, because most hardware shares a common instruction set: as mentioned above, x86 or x64 machine code/opcodes. Some programmers do write software designed to run better on certain hardware, i.e. optimized for AMD or Intel, but they still ship versions for other hardware.
What you mainly need to worry about is the OS and its bit width (32- vs 64-bit).
Most compilers and software makers compile to the shared machine code rather than anything manufacturer-specific. That said, it should be remembered that different people use the same things in different ways: a group at MIT may decide to write their own OS for their needs and want to use advanced features specific to Intel's instruction set, just as some people completely rebuild their own Android images, and so on.
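
As a concrete illustration of the point above (one shared instruction set, optional vendor-specific tuning), here is a minimal C sketch for GCC or Clang on x86/x64 that queries the CPU with the CPUID instruction. A program can use exactly this kind of check to pick an optimized code path at run time while the same binary still runs on both Intel and AMD chips:

    #include <stdio.h>
    #include <string.h>
    #include <cpuid.h>   /* GCC/Clang wrapper around the x86 CPUID instruction */

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* Leaf 0: vendor string, e.g. "GenuineIntel" or "AuthenticAMD" */
        char vendor[13] = {0};
        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
            return 1;
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        printf("Vendor : %s\n", vendor);

        /* Leaf 1: architectural feature flags, reported the same way by both vendors */
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 1;
        printf("SSE4.2 : %s\n", (ecx & bit_SSE4_2) ? "yes" : "no");
        printf("AVX    : %s\n", (ecx & bit_AVX)    ? "yes" : "no");
        return 0;
    }
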

Related

Why does QEMU use JIT compilation?

The TCG "accelerator" is used by QEMU for full-system emulation of a guest whose hardware architecture differs from the host's (or when explicitly selected with -accel tcg).
TCG is a JIT compiler which emulates the guest instruction set by translating guest instructions into host code and immediately invoking that code at runtime. Portability depends on the list of architectures that TCG supports.
Would it be possible, realistically speaking, to compile an operating system into some efficient IR (similar to Java bytecode) and implement a virtual machine for that bytecode completely in software?
The short answer to "why does QEMU use JIT compilation" is "because it is faster than other ways to do it, like interpreting, but it can still handle any arbitrary guest binary". There has been some work done (not in QEMU itself, but by other projects or research work) on emulation by statically translating guest binaries into code for the host architecture, but this is tricky and you still have to be able to fall back to something like JIT to handle guest binaries that involve self-modifying code or which themselves are JITs (think of running a Java guest inside QEMU).
It is certainly possible to have an operating system which is compiled into an IR bytecode which then executes portably on a virtual machine on a variety of hosts. Historical examples of this include Taos (http://www.uruk.org/emu/Taos.html) and the UCSD p-System (https://en.wikipedia.org/wiki/UCSD_Pascal). Note that you would probably still want to implement the bytecode-execution engine in such a VM using a JIT, because it's faster than interpreting the bytecode, and there might well be some host-CPU-specific bits of the VM implementation as a result.
However, that sort of portable-operating-system endeavour is an entirely separate idea from QEMU, whose purpose is to run under emulation existing pre-built binaries for a given guest CPU architecture.
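
To make the "translate, then immediately invoke" idea concrete, here is a toy C sketch (Linux, x86-64 host only, and in no way QEMU/TCG code) that plays the role of a JIT: it writes a few bytes of host machine code into an executable buffer and calls them like a normal function. A real JIT such as TCG does this per translated block of guest instructions and manages the write/execute permissions much more carefully:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* "Translated" code for an imaginary guest instruction:
           mov eax, 42 ; ret */
        unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

        /* Allocate a page that is both writable and executable
           (may be blocked on hardened systems). */
        void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;
        memcpy(buf, code, sizeof code);

        int (*fn)(void) = (int (*)(void))buf;
        printf("JIT-compiled function returned %d\n", fn());   /* prints 42 */

        munmap(buf, 4096);
        return 0;
    }
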

How can I use QEMU to simulate mixed platforms?

Background
There is a lot of documentation about using QEMU for simulating a system of particular architecture (a "platform").
For example, x86, ARM or RISCV system.
The first step is to configure the QEMU target list, for example ./configure --target-list=riscv32-softmmu.
It's also possible to provide multiple targets in the target list, but apparently that builds an independent simulation for each specified platform.
My goal, however, is to simulate a system with mixed targets: an x86 machine which also hosts a RISCV embedded processor over PCI.
Obviously I need to implement a QEMU PCI device which would host the RISCV device on the x86 platform, and
I have a good idea how to implement a generic PCI device.
However, I'm not sure about the best approach to simulate both x86 and RISCV together on the same QEMU simulation.
One approach is to run two instances of QEMU (as two separate processes) and use some sort of IPC for communicating between the x86 and the RISCV simulation.
Another possible (?) approach could be to build RISCV QEMU as a loadable library and load it from x86 QEMU.
Perhaps it's even possible to have a single QEMU application that simulates both x86 and RISCV?
Yet another approach is not to use QEMU for simulating the RISCV device. I could implement a QEMU PCI device that completely encapsulates a RISCV simulation such as tiny-emu, but I would rather use QEMU for both x86 and RISCV.
My questions are:
Are there some guidelines or examples for a mixed-target QEMU project?
I've searched for examples but only found references to using QEMU as a single platform simulation, where first you choose which platform you would like to run.
What would be the best approach for simulating a mixed platform in QEMU? Separate QEMU processes with IPC? Or is there a way to configure QEMU so that it simulates a mixed platform?
Related
https://lists.gnu.org/archive/html/qemu-devel/2021-12/msg01969.html
QEMU does not support running multiple target architectures in the same QEMU process. (This is something we would in theory like to be able to do, but it would require a lot of reworking of core parts of QEMU which assume that the target architecture is known at compile time. So far nobody has felt it important enough to put in the significant development effort needed.)
So if you want to do this you'll need to somehow stitch together a QEMU process for the primary architecture with some other process to do the secondary architecture (QEMU or otherwise). This has been done (for instance Xilinx have an out-of-tree QEMU-based system that does this kind of thing with multiple QEMU processes) but I'm not aware of any easy off-the-shelf frameworks or setups to do it. I suspect that figuring out how time/clocks interact between the two simulations is one of the tricky aspects.
There is another option:
you can start two QEMU processes and connect them through a socket.
Then you can create a run script that starts both of them in the order you need.
It's less "clock" accurate, but good enough for virtualizing your hardware.
The other option is https://wiki.qemu.org/Features/MultiProcessQEMU,
but you will need to do some hacking on this experimental code.
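
As a sketch of what the "two QEMU processes plus a socket" idea looks like at the lowest level, here is a hypothetical C example (not a QEMU API; the port, message layout and register are invented for illustration) of a tiny framed protocol: one process plays the RISC-V "device" side and services register accesses, the other plays the x86 side and forwards reads and writes over a loopback TCP connection. In a real setup the client half would live inside a QEMU PCI device model and the server half inside the RISC-V simulation, but the framing problem is the same:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    struct msg {                 /* one register access forwarded over the link */
        uint8_t  is_write;
        uint32_t addr;
        uint32_t data;
    } __attribute__((packed));

    int main(int argc, char **argv)
    {
        struct sockaddr_in a = {0};
        a.sin_family      = AF_INET;
        a.sin_port        = htons(7000);
        a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

        if (argc > 1 && strcmp(argv[1], "device") == 0) {
            /* "RISC-V side": accept one connection and service register accesses */
            int s = socket(AF_INET, SOCK_STREAM, 0);
            bind(s, (struct sockaddr *)&a, sizeof a);
            listen(s, 1);
            int c = accept(s, NULL, NULL);
            struct msg m;
            uint32_t reg = 0;                        /* one fake device register */
            while (read(c, &m, sizeof m) == sizeof m) {
                if (m.is_write) {
                    reg = m.data;                    /* register write from the x86 side */
                } else {
                    m.data = reg;                    /* register read: send the value back */
                    write(c, &m, sizeof m);
                }
            }
            close(c);
            close(s);
        } else {
            /* "x86 side": forward a write, then a read, over the link */
            int s = socket(AF_INET, SOCK_STREAM, 0);
            connect(s, (struct sockaddr *)&a, sizeof a);
            struct msg m = { 1, 0x10, 0xCAFE };
            write(s, &m, sizeof m);                  /* MMIO write */
            m.is_write = 0;
            write(s, &m, sizeof m);                  /* MMIO read request */
            read(s, &m, sizeof m);
            printf("read back 0x%X\n", m.data);
            close(s);
        }
        return 0;
    }

Run the device side first (./a.out device), then the other side in a second terminal. Keeping the two simulations loosely coupled like this is what makes the approach simple, and also what makes it less clock-accurate than a single-process simulation.
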
Use Renode. It not only provides easy multi-CPU simulation, but also HDL and multi-machine simulation synchronized in a single process.

What is a platform when we talk about embedded systems?

I am trying to learn about Linux porting, booting and related topics, and one term that keeps coming up is "platform". What is it?
a CPU?
a board?
an overall term for board + CPU?
For example, when we say platform-specific code, do we mean architecture (CPU) specific code?
The answer depends on context. If you are porting Linux, the platform from that point of view is the hardware you are porting it to. If you are writing applications to run on Linux on that hardware, then the platform is both the OS and the hardware.
Furthermore, if you were targeting a GUI framework such as KDE or Gnome, that would be part of the "platform" too; or if you were running Java code, the platform would include the JVM.
Essentially, it is the stuff that needs to already exist in order for your code to run. Generally a platform consists of layers; the "platform" as such comprises whatever layers exist below the one you are developing at the time.

Using OpenCL in Linux and IDEs

To use OpenCL on Linux, must I have an NVIDIA GPU?
My computer has an Intel GPU and an Intel i3 CPU that supports SSE3 and SSE4. I want to program with OpenCL on Windows; can I use an IDE other than "Visual Studio", for example "Code::Blocks"?
Thank you
You can use OpenCL with any GPU, as it can run on a CPU as well (that's one of the strong points of OpenCL vs CUDA and the like).
But if you want OpenCL to actually use your GPU and not (or not only) your CPU, you will have to have a driver for your GPU which supports OpenCL, e.g. AMD or NVIDIA. Intel also lists Intel HD and Intel Iris graphic chips as supported through their OpenCL SDK, but you should probably check what you're actually running on if you want to make sure (e.g. check at the start of your program - see Appendix A).
Also, OpenCL has NOTHING to do with CPU extensions like SSE (specific implementations may use SSE/AVX/whatever CPU extension for better performance, but OpenCL does not require any of those per se), or with the IDE you use, and only very little to do with the operating system. So you're free to use whichever IDE you want (in the end, the IDE is only the editor you write your code with). In the case of Visual Studio people often tend to mix up the IDE with the compiler, as Visual Studio uses its own compiler by default, but afaik even there you're free to change it to e.g. the MinGW or Cygwin provided compiler, or use icc. (Feel free to correct me on the Visual Studio part as I've only tested it once before completely wiping it forever.)
Appendix A: How to check which devices can be used by OpenCL on a given system: http://dhruba.name/2012/08/14/opencl-cookbook-listing-all-devices-and-their-critical-attributes/
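
For reference, here is a minimal C program along the lines of what Appendix A describes: it enumerates every OpenCL platform and device visible on the system (build with something like gcc list_devices.c -lOpenCL, assuming the OpenCL headers and an ICD loader are installed):

    #define CL_TARGET_OPENCL_VERSION 120
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;
        clGetPlatformIDs(8, platforms, &num_platforms);
        if (num_platforms > 8) num_platforms = 8;

        for (cl_uint p = 0; p < num_platforms; ++p) {
            char pname[256];
            clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof pname, pname, NULL);
            printf("Platform: %s\n", pname);

            cl_device_id devices[8];
            cl_uint num_devices = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices);
            if (num_devices > 8) num_devices = 8;

            for (cl_uint d = 0; d < num_devices; ++d) {
                char dname[256];
                cl_device_type type;
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof dname, dname, NULL);
                clGetDeviceInfo(devices[d], CL_DEVICE_TYPE, sizeof type, &type, NULL);
                printf("  Device: %s (%s)\n", dname,
                       (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                       (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "other");
            }
        }
        return 0;
    }
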

If I wanted to develop algorithms for a purely RISC machine, what should my development environment be?

Short of buying a SPARC processor, what emulators are there? Thanks.
Pick up a second-hand Power Mac G5 and you can run a fairly recent version of a mainstream OS (i.e. OS X 10.5.8) and a modern development environment (Xcode 3.1.4).
You get a pretty fast, modern RISC machine running an OS that is still highly used (for the time being, I admit.)
You could also install Linux onto it if that would be better for your needs.
Probably a lot easier to find and cheaper than a SPARC machine.
You could also install the SPIM emulator for MIPS
On revisiting this, it's worth noting that nearly all modern smartphones run on ARM processors, where ARM was originally short for 'Acorn RISC Machine'. So, an easy answer is 'Android Studio' or anything else targeting phone applications.
Similarly, there's a plethora of simple development boards available inexpensively, such as the BeagleBone Black and the Raspberry Pi, that also carry ARM processors.