I'm developing for an embedded system using a custom toolchain file. CMAKE_SYSTEM_NAME is set to "Generic", so the WIN32, UNIX, etc. variables can't be used to check the operating system on which the project is configured. But I need to configure the project differently on different operating systems.
How can I determine the operating system on which CMake is executed?
Check the variables that describe the system.
Of particular interest is the CMAKE_HOST_SYSTEM variable and its relatives. Unfortunately, the exact behavior of these depends largely on the platform and toolchain in use. If you don't get sensible values for your environment, consider writing to the CMake mailing list to request better support for your platform.
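For instance, here is a minimal sketch of how one might branch on the host OS even while CMAKE_SYSTEM_NAME is "Generic". The variables are standard CMake; the names passed to set() are only illustrative:

    # CMAKE_HOST_SYSTEM_NAME describes the machine running CMake,
    # independent of the CMAKE_SYSTEM_NAME chosen by the toolchain file.
    if(CMAKE_HOST_SYSTEM_NAME STREQUAL "Windows")
        set(HOST_IS_WINDOWS TRUE)   # illustrative name
    elseif(CMAKE_HOST_SYSTEM_NAME STREQUAL "Linux")
        set(HOST_IS_LINUX TRUE)
    elseif(CMAKE_HOST_SYSTEM_NAME STREQUAL "Darwin")
        set(HOST_IS_MACOS TRUE)
    endif()

    # CMAKE_HOST_WIN32 / CMAKE_HOST_UNIX / CMAKE_HOST_APPLE are the
    # host-side counterparts of WIN32 / UNIX / APPLE.
    if(CMAKE_HOST_WIN32)
        message(STATUS "Configuring on a Windows host")
    endif()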
(Using C++) How would software developers make a profitable program to distribute? Would they use pre-compiled binaries within the project directory or something? I'm just learning wxWidgets and I want to make an application to put on a website.
#pizzadog,
The answer depends on the target platform.
If you are on Windows, you create a binary and use installer software to create a distribution.
If you are on *nix, the answer depends on whether you want to allow the end user to compile the software. If you do, you put a link to the source on the site and provide documentation on how to build it. If you don't, you create a packaging rules file appropriate for the *nix distribution you are targeting.
If you are on Mac, you put the application bundle up for download and clearly state the minimum OS X version the program will run on.
HTH.
I have an embedded project for the ARM platform, specifically aarch64.
Up until now I was using Make. I recently set up CMake with no particular issues.
I moved to CMake because I was under the impression it is a more modern build tool that would allow smarter configuration.
For example, I can compile my project using different toolchains (aarch64-elf-gcc-linaro, aarch64-linux-gnu-gcc, ...) and I would like CMake to check whether any of those is installed on the system and, by default, use whichever is found first.
Is this possible (or even intended)? I'd expect it to be an easy feat for the tool, but after searching for a while I can't seem to find the right track.
Yes, you can make your CMake project search for the available toolchains installed on your system, choose one, and compile your project. I also wrote a CMake build for an ARM embedded project, because it is then portable between different operating systems, Windows and Unix. On Linux an ARM toolchain is installed, and on Windows there is Keil MDK. If you have different toolchains to choose between, you can write a CMake script that finds their paths with a command like find_path() and then includes the right "toolchainxx.cmake" script with the correct compiler flags for the chosen compiler.
For your particular problem, just use find_path() commands and use the hits to locate the compilers installed in known, pre-set paths.
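As a concrete sketch, an illustrative toolchain file (passed to CMake with -DCMAKE_TOOLCHAIN_FILE=...) could probe for the compilers named in the question, first hit wins. The compiler names are adapted from the question, find_program() is CMake's command for locating executables, and everything else here is just an example, not a drop-in file:

    # toolchain-aarch64.cmake -- illustrative sketch
    set(CMAKE_SYSTEM_NAME Generic)
    set(CMAKE_SYSTEM_PROCESSOR aarch64)

    # Search the PATH for the candidate cross-compilers, in order of preference.
    find_program(CROSS_GCC NAMES aarch64-elf-gcc aarch64-linux-gnu-gcc)
    if(NOT CROSS_GCC)
        message(FATAL_ERROR "No supported aarch64 cross-compiler found in PATH")
    endif()
    set(CMAKE_C_COMPILER "${CROSS_GCC}")

    # Derive the matching g++ from the gcc path, if C++ is needed.
    string(REPLACE "gcc" "g++" CROSS_GXX "${CROSS_GCC}")
    set(CMAKE_CXX_COMPILER "${CROSS_GXX}")

    # Bare-metal targets usually cannot link CMake's compiler test executable.
    set(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY)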
I am new to Angstrom and Linux. I am in the process of building an SDK with Angstrom, but I can't understand what a toolchain is and what it does. Please, someone help me here.
In software, a toolchain is the set of programming tools that are used to create a product (typically another computer program or system of programs). The tools may be used in a chain, so that the output of each tool becomes the input for the next, but the term is used widely to refer to any set of linked development tools.
A simple software development toolchain consists of a compiler and linker to transform the source code into an executable program, libraries to provide interfaces to the operating system, and a debugger.
http://en.wikipedia.org/wiki/Toolchain
I am a newbie to embedded development, as will quickly become obvious. I have a small ARM board, the AT91SAM7-EX256, and I also have a JTAG programmer dongle. I am using Linux (Ubuntu x86_32) on my notebook and desktop machine, and I'm using CodeSourcery Lite for cross-compiling to ARM-Linux.
Am I right that I can't use this Linux-target cross-compiler to make binary or hex files for the small ARM board (it comes without any operating system)? Should I use the version called ARM EABI instead?
As far as I can see, it's a "generic" ARM compiler. I've read some docs, and there are lots of options to specify the processor type and instruction set (Thumb, etc.), so there will be no problem with that. But how can I tell the compiler what the image (bin/hex) should look like for the specific board (startup, code/data blocks, etc.)? (In assemblers, there are the org and load directives for this.)
What software do I need to capture some debug messages from the board on my PC? I don't want to do on-board debugging; I just need some detailed run-time signal, more than just blinking LEDs.
I have the option of using MS Windows; I can get a dedicated machine for it. Do you recommend it? Is it much easier?
Can I use inline assembly somehow in my C code? I don't know anything about that. Can I use C++ or just C?
I also have a question which doesn't really need an answer: are there really 4096 kinds of GNU compilers and cross-compilers (Linux_x86_32 -> Linux_x86_32, Linux_x86_32 -> Linux_ARM, OSX -> Linux_ARM, PPC_Linux -> OSX, and so on) and 16 different GNU compiler sources (as many as there are target platforms/processors) around? The signs say "yes", but I can't believe it. Correct me, and show me the GNU compiler which can produce an object file for any platform/processor, and the universal linker which can produce an executable for any platform.
While Windows is not a "better" platform to do this kind of embedded development on, it may be easier to start with, since you can get a pre-built environment to work with. For example, Yagarto (which I would recommend).
Setting up an embedded development environment on Linux can require a considerable amount of knowledge, but it's not impossible.
To answer your questions:
Your Linux cross-compiler comes with libraries to build executables for a Linux environment. You have hinted that you want to build a bare-metal executable for this board. While you can do this with your compiler, it will just confuse things. I recommend building a bare-metal cross-compiler. Since you're building your own bare-metal executable (and thus you are the operating system), the ABI doesn't matter, because you're generating all of the code and not interoperating with other previously built code.
There are several versions of the ARM instruction set (and Thumb). You need to generate code for your particular processor. If you generate code for a newer version of the instruction set, it will likely raise a reserved instruction exception on your processor. Most prebuilt gcc cross-compiler toolchains for ARM are "multilib" and will build for a variety of architectures in both ARM and Thumb.
Not sure exactly what you're looking for here. This is a bare-metal platform. You can use the debugger channel to send messages if you're debugging on target, or you'll need to build your own communication channel into the firmware you write (e.g., UART support).
See above.
Yes. See the GCC documentation for details on its extended inline assembly syntax. You can do this in C++ and C. You can also simply link in pure assembly files.
There is no universal gcc compiler / linker. You need a uniquely built compiler for each host / target combination you use.
Finally, please take a look at Atmel's documentation. They have a wealth of information on developing for this target as well as a board package with the needed linker directives and example programs. Note, of course, that the package is for Atmel's own eval board, but it will get you started.
http://sam7stuff.blogspot.com/
I use either of the CodeSourcery Lite versions. But I have no use for the gcc library or a C library; I just need a compiler.
In the gcc 3 days newlib was great: modify two files' worth of system support (simple open, close, read, putc type stuff) and you could compile just about anything. But with gcc 4.x you cannot even go back and cross-compile gcc 3.x; you have to install an old Linux distro in a virtual machine.
To get the gcc library, yes, you probably want to use the EABI version, not the version with "linux-gnueabi" in the file names.
You might also consider llvm (if you don't need a C library; you will still need binutils). Hmm, I wonder if newlib compiles with llvm.
I prefer to avoid getting trapped in sandboxes: learn the tools and how to manipulate the linker, etc., to build your binaries.
I am quite new to embedded Linux programming and do not really understand this concept very well.
Can anyone explain the essence of the "host-target" relation? Is this model specific only to "cross-compilation"? Is it used just because the "executable code will be run in another environment"? And what is the significance of the Linux kernel on the target? For example, the "Building Embedded Linux Systems" book mentions this, but does not explain the motivation or goal of this type of development.
Thanks a lot.
The 'motivation' for this model is that seldom is an embedded target a suitable platform for development. It may be resource constrained, have no operating system, have no compiler that will run on the target, have no filesystem for source files, have no keyboard or display, no networking, and may be relatively slow or anything else you might need to develop effectively.
If your embedded system is suited to running Linux, it is possible that not all of the above limitations apply, but almost certainly enough of them do to make you want to avoid developing directly on the target. If this were not so, then perhaps it hardly qualifies as an embedded system.
http://www.landley.net/writing/docs/cross-compiling.html
Seems pretty clear. What specific questions do you have?
Linux, since its very origin, was written in a very portable way. It runs on a whole range of machines with very different CPUs, and it is considered a Good Thing to write in a portable way, so that, for example, a package maintainer can easily port your program to some embedded ARM, or Cygwin, or Amiga, or...
So, yes, the model is "only" specific to cross-compilation, but actually just about every compilation on Linux is a (variant of) cross-compilation; it's just that by default build, host, and target are automatically set to the same value, the same as the machine you run on.
Still, even then, you can take a compiler built for Linux-i386, its sources, and "cross-compile" it for Linux-amd64. And the resulting binary will run much faster on a 64-bit CPU.
It IS quite essential in embedded programming, though. Mostly because you write programs for weak CPUs that are not capable of running a compiler, or would run it at a snail's pace. So you take a cross-compiler on a fast CPU (say, some multi-core Intel) and cross-compile for the embedded CPU (say, some low-end ARM).
"In a different environment" is putting things very mildly. What you're doing when cross-compiling for embedded is working with an entirely different instruction set, different memory access modes, different resource access methods, and so on. A machine of entirely different construction than the build host. Your build host may be a Windows PC running Cygwin. Your target may be a chip inside a smartphone. The binary will look nothing like the Cygwin .exe files.
As a direct consequence, -everything- must be compiled for the target from scratch: the kernel, the system utilities, the system libraries, all the tools the target must be running. Thing is, if the target is a ticket-selling booth, there is really no sense cross-compiling Eclipse, GCC, and GNOME for it and then developing in a "local" environment, typing your code on a ticket booth keyboard. Instead, you just cross-compile the essentials of the OS plus your specific applications. You keep the development environment on the build machine and cross-compile everything you need for the embedded device.
[in practice, you get a Linux distro for the target, and just compile whatever you need modified].
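Tying this back to the CMake questions earlier in this thread: the host/target split described above is exactly what CMake records in its paired variables. A tiny sketch that simply prints both sides (all of the variables are standard CMake; the messages are illustrative):

    # The build/host machine: where cmake and the compiler actually run.
    message(STATUS "Host machine   : ${CMAKE_HOST_SYSTEM_NAME} ${CMAKE_HOST_SYSTEM_PROCESSOR}")
    # The target machine: what the toolchain file says we are building for.
    message(STATUS "Target machine : ${CMAKE_SYSTEM_NAME} ${CMAKE_SYSTEM_PROCESSOR}")
    # Set by CMake when CMAKE_SYSTEM_NAME was given explicitly (e.g. by a toolchain file).
    message(STATUS "Cross-compiling: ${CMAKE_CROSSCOMPILING}")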