I have written a small application in C on the VxWorks 6.9 evaluation. It uses the sample code and makefile from windriver/target/usr/apps/samples. I would like to compile it for platforms other than Intel.
I have downloaded the DIAB compiler evaluation, which comes with an impressive list of targets; however, I have issues compiling my app, mainly because it is missing the make rules from target/usr/make, etc.
What are my options for compiling an RTP/VXE without VxWorks?
Or should I install the DIAB evaluation on top of the VxWorks evaluation?
Thanks!
I'm trying to compile a very simple TensorFlow program (which only prints the TensorFlow version) with my company's C compiler, but the libtensorflow.so I downloaded from TensorFlow's official website is incompatible with our C compiler.
My company's C compiler is essentially a standard GCC, yet GCC can compile the program and our custom compiler cannot.
My colleague told me I have two options: (1) replace Bazel's compiler with our compiler and use Bazel to compile the program, or (2) compile the program with Bazel first, then compile it with our compiler, including the .pb.h files generated by Bazel (because those files can only be generated by Bazel).
I'm not sure how to do (1), but I tried (2). The problem with (2) is that I got errors saying the protoc output was generated by an older version, and I'm not sure how to change to the right version.
Some additional information: (1) the OS is Linux, (2) I do not have the privilege to use sudo commands, and (3) I cannot access system directories (e.g. /usr/local).
Is there any hope I can make this work? You may ask why I don't just build the program with Bazel: it's because our company's program needs to be run by our company's simulator, and the simulator only accepts programs generated by our company's compiler.
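For reference, a program of the kind described (printing only the version) would presumably go through the TensorFlow C API, roughly like this; TF_Version() is declared in tensorflow/c/c_api.h and the program links against the downloaded libtensorflow.so:

#include <stdio.h>
#include <tensorflow/c/c_api.h>   /* C API header shipped with libtensorflow */

int main(void)
{
    /* TF_Version() returns the library's version string. */
    printf("TensorFlow C library version %s\n", TF_Version());
    return 0;
}

It would be built with something like gcc print_version.c -I<include dir> -L<lib dir> -ltensorflow -o print_version, where the include and library directories point at wherever the libtensorflow download was unpacked.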
Your only option is to build TensorFlow with Bazel and tell Bazel to use your C/C++ compiler. The easiest way is to set the CC and CXX environment variables to point to your compiler's executables. If it is really a drop-in replacement for GCC, it should work, and after building you should get a TensorFlow binary compiled with your custom compiler.
If special flags are needed, you should define a custom toolchain in Bazel to tell it how to use your compiler; it is a bit complex, but not overly so. Instructions are at https://github.com/bazelbuild/bazel/wiki/Building-with-a-custom-toolchain
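As a sketch (the compiler paths are hypothetical and the Bazel target is just the usual C-API shared library; pick whichever target you actually need):

export CC=/opt/company/bin/cc     # hypothetical path to your company's C compiler
export CXX=/opt/company/bin/c++   # hypothetical path to its C++ driver
bazel build //tensorflow:libtensorflow.so

Bazel's auto-configured C++ toolchain reads CC and CXX when it generates its toolchain configuration; if a configuration has already been cached with the old compiler, a bazel clean --expunge may be needed before the new one is picked up.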
I'm doing some experiments with an evaluation version of the Wind River Diab (dcc) compiler. I would like to do some testing on my Windows PC.
However, I think I have the wrong target setup.
I've got as far as using the 'dctrl -t' command to get the list of target architectures, but selecting options so far hasn't produced anything I can run on Windows.
I'm simply doing:
dcc main.c -o main.exe
Am I missing a step?
Do I have the wrong target?
Or is it simply not possible to create Windows binaries?
I believe that the Diab compiler targets a free-standing environment, so it would not produce a Windows executable. Moreover, x86 is not a supported target processor in any case; see the product brief.
The compiler is intended for use with VxWorks, though it can be licensed separately. The toolchain includes an instruction-set simulator for executing target code in a simulated environment, and if you are using VxWorks, a VxWorks simulator is included as well.
If you want to build your code as a native Windows application, you will have to use a Windows-targeted compiler. I suggest MinGW/GCC, since Wind River supports both its own Diab compiler and GCC for VxWorks development, and the two share a great deal of commonality with respect to compiler switches and extension syntax.
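For instance, with MinGW's gcc on the PATH, the test code from the question builds and runs natively as:

gcc main.c -o main.exe
main.exe

whereas a Diab build stays cross-targeted, along the lines of dcc -t<target> main.c -o main.elf, where <target> is a placeholder for one of the configurations dctrl -t reports, and the resulting image is run on the instruction-set simulator or a VxWorks simulator rather than on Windows itself.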
I have been researching Golang and I see that it has a compiler.
But does it compile Go into assembly-level code, or does it just convert it into bytecode and call that compilation? I mean, even in PHP we are able to convert code into bytecode for faster performance.
Is Golang a replacement for system-level programming and compiling?
It is really a compiler (in fact it embeds two compilers), and it produces totally self-sufficient executables. You don't need any supplementary library or any kind of runtime to execute the result on your server. You just have to compile it for your target computer architecture.
From the documentation:
There are two official Go compiler tool chains. This document focuses on the gc Go compiler and tools (6g, 8g etc.). For information on how to work on gccgo, a more traditional compiler using the GCC back end, see Setting up and using gccgo.
The Go compilers support three instruction sets. There are important differences in the quality of the compilers for the different architectures.
amd64 (a.k.a. x86-64); 6g, 6l, 6c, 6a: A mature implementation. The compiler has an effective optimizer (registerizer) and generates good code (although gccgo can do noticeably better sometimes).
386 (a.k.a. x86 or x86-32); 8g, 8l, 8c, 8a: Comparable to the amd64 port.
arm (a.k.a. ARM); 5g, 5l, 5c, 5a: Supports only Linux binaries. Less widely used than the other ports and therefore not as thoroughly tested.
Except for things like low-level operating system interface code, the run-time support is the same in all ports and includes a mark-and-sweep garbage collector, efficient array and string slicing, and support for efficient goroutines, such as stacks that grow and shrink on demand.
The compilers can target the FreeBSD, Linux, NetBSD, OpenBSD, OS X (Darwin), and Windows operating systems. The full set of supported combinations is listed in the discussion of environment variables below.
On a server you'll usually target the amd64 platform.
Note that Go is well known for its compilation speed. When deploying my server programs, I don't build for the different platforms on the development computer: I deploy the sources and compile directly on the production servers. Since Go 1, I have never had code that compiled on one platform and not on the others.
On Windows I had no problem making an exe on my development computer and simply sending that exe to people who had never installed anything Go-related.
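To illustrate that workflow (the import path is hypothetical):

cd $GOPATH/src/example.com/myserver   # hypothetical project location on the production server
go build -o myserver                  # produces a single self-contained binary
./myserver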
Go compiles quickly to machine code yet has the convenience of garbage collection and the power of run-time reflection. It's a fast, statically typed, compiled language that feels like a dynamically typed, interpreted language.
Source - golang.org
Golang is a compiled language; it can easily be compiled on the development computer for any targeted system, such as Linux or Mac.
Once compiled, a Golang project becomes a self-sufficient executable and can be run on the targeted system without anything additional. That's because the Go compiler turns your code into native machine code for the target, much like compiled C code.
Mono has an LLVM compiler. Is there any way to use it with Emscripten (to compile C# code to JavaScript)?
There is currently no out-of-the-box way to do this. It might be possible, but it would require a lot of work. You would need to run Mono in full AOT (ahead-of-time) compilation mode with the LLVM codegen. But there are many issues:
LLVM is currently not used for all methods, and Mono falls back to its own code generator in a number of cases. You would either need to get the LLVM support working for all cases, or provide the JS code needed when LLVM cannot be used.
Mono currently has a number of architecture-specific files (x86, amd64, ARM, etc.) and would probably need equivalents for JS, both for code generation and for the AOT runtime.
And so on...
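For reference, the full-AOT-with-LLVM mode mentioned above is invoked roughly as follows; it needs a Mono runtime built with LLVM support, and the exact option spelling varies between Mono versions, so treat this as a sketch and check the mono man page:

mono --llvm --aot=full yourapp.exe   # yourapp.exe is a placeholder assembly name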
You can try to use C# Native.
Have a look here: http://csnative.codeplex.com
Even if you run Mono in full AOT mode and compile your program with LLVM, it is not possible to use it with Emscripten. This is a quote from my discussion in the Mono group:
Besides that, no, it's not possible to use emscripten with mono's llvm output for a lot of reasons.
The output embeds calls to the Mono runtime and some trampolines.
Mono uses a custom LLVM with custom IR operations, and that won't work on Emscripten without some work on their end.
I am a newbie to embedded development. I have a small ARM board, the AT91SAM7-EX256, and a JTAG programmer dongle. I am using Linux (Ubuntu x86_32) on my notebook and desktop machines, and CodeSourcery Lite for cross-compiling to ARM Linux.
Am I right that I can't use this Linux-target cross-compiler to make binary or hex files for the small ARM board (it comes without any operating system)? Should I use the version called ARM EABI instead?
As far as I can see, it's a "generic" ARM compiler. I've read some docs, and there are lots of options to specify the processor type and instruction set (Thumb, etc.), so there should be no problem with that. But how can I tell the compiler what the image (bin/hex) should look like for the specific board (startup, code/data blocks, etc.)? (In assemblers, there are the org and load directives for that.)
What software do I need to capture debug messages from the board on my PC? I don't want to do on-board debugging; I just need some detailed run-time signals, more than just blinking LEDs.
I have the option to use MS Windows; I can get a dedicated machine for it. Do you recommend it? Is it much easier?
Can I use inline assembly somehow in my C code? I don't know anything about that. Can I use C++, or just C?
I also have a question that doesn't need an answer: are there really 4096 kinds of GNU compilers and cross-compilers (Linux_x86_32 -> Linux_x86_32, Linux_x86_32 -> Linux_ARM, OSX -> Linux_ARM, PPC_Linux -> OSX) and 16 different GNU compiler sources (as many as there are target platforms/processors)? The signs say "yes", but I can't believe it. Correct me, and show me the GNU compiler that can produce an object file for any platform/processor, and the universal linker that can produce an executable for any platform.
While Windows is not a "better" platform to do this kind of embedded development on, it may be easier to start with, since you can get a pre-built environment to work with, for example Yagarto (which I would recommend).
Setting up an embedded development environment on Linux can require a considerable amount of knowledge, but it's not impossible.
To answer your questions:
Your Linux cross-compiler comes with libraries to build executables for a Linux environment. You have hinted that you want to build a bare-metal executable for this board. While you can do this with your compiler, it will just confuse things; I recommend building a bare-metal cross-compiler. Since you're building your own bare-metal executable (and thus you are the operating system), the ABI doesn't matter, because you're generating all of the code and not interoperating with previously built code.
There are several versions of the ARM instruction set (and Thumb). You need to generate code for your particular processor. If you generate code for a newer version of the instruction set, you will likely produce code that raises a reserved-instruction exception. Most prebuilt gcc cross-compiler toolchains for ARM are "multilib" and will build for a variety of architectures in both ARM and Thumb.
Not sure exactly what you're looking for here. This is a bare-metal platform. You can use the debugger channel to send messages if you're debugging on target, or you'll need to build your own communication channel into the firmware you write (i.e., UART support; see the sketch at the end of this answer).
See above.
Yes. See the gcc documentation for details on its extended inline assembly syntax (there is a small example after this list). You can do this in both C and C++. You can also simply link in pure assembly files.
There is no universal gcc compiler / linker. You need a uniquely built compiler for each host / target combination you use.
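As a small illustration of the inline assembly point, here is the flavour of gcc's extended inline assembly on an ARM7TDMI-class core (just a sketch, not code from the question):

#include <stdint.h>

/* Read the ARM CPSR (current program status register) with gcc's
   extended inline assembly: "=r" asks for any general-purpose
   register as the output operand. */
static inline uint32_t read_cpsr(void)
{
    uint32_t value;
    __asm__ volatile ("mrs %0, cpsr" : "=r" (value));
    return value;
}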
Finally, please take a look at Atmel's documentation. They have a wealth of information on developing for this target as well as a board package with the needed linker directives and example programs. Note of course the package is for Atmel's own eval board, but it will get you started.
http://sam7stuff.blogspot.com/
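To make the bare-metal and UART points concrete, here is a rough sketch of a polled debug-output routine for the AT91SAM7's Debug Unit (DBGU) and the kind of invocation that builds it. The register addresses are my reading of the AT91SAM7X256 datasheet (verify them there), DBGU initialisation (baud rate, pin setup) is assumed to have been done elsewhere, and startup.S / at91sam7x256.ld are placeholders for the startup code and linker script you would take from a board package such as Atmel's or sam7stuff's:

#include <stdint.h>

/* AT91SAM7X256 Debug Unit registers -- check against the datasheet. */
#define DBGU_BASE  0xFFFFF200u
#define DBGU_SR    (*(volatile uint32_t *)(DBGU_BASE + 0x14))  /* status register    */
#define DBGU_THR   (*(volatile uint32_t *)(DBGU_BASE + 0x1C))  /* transmit holding   */
#define DBGU_TXRDY (1u << 1)                                    /* transmitter ready  */

/* Blocking write of one character to the DBGU serial port
   (assumes the DBGU has already been configured). */
void debug_putc(char c)
{
    while ((DBGU_SR & DBGU_TXRDY) == 0)
        ;                       /* wait until a byte can be accepted */
    DBGU_THR = (uint32_t)c;
}

void debug_puts(const char *s)
{
    while (*s)
        debug_putc(*s++);
}

Built and converted to a flashable image with something like:

arm-none-eabi-gcc -mcpu=arm7tdmi -nostdlib -T at91sam7x256.ld startup.S main.c debug.c -o firmware.elf
arm-none-eabi-objcopy -O binary firmware.elf firmware.bin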
I use either of the CodeSourcery Lite versions, but I have no use for the gcc library or a C library; I just need a compiler.
In the gcc 3 days newlib was great: modify two files' worth of system support (simple open, close, read, putc-type stuff) and you could compile just about anything. But with gcc 4.x you cannot even go back and cross-compile gcc 3.x; you have to install an old Linux distro in a virtual machine.
To get the gcc library, yes, you probably want to use the EABI version, not the version with linux-gnueabi in the file names.
You might also consider LLVM (if you don't need a C library; you will still need binutils). Hmm, I wonder if newlib compiles with LLVM.
I prefer to avoid getting trapped in sandboxes: learn the tools and how to manipulate the linker, etc., to build your binaries.
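For reference, that system support is newlib's set of syscall stubs. A minimal retargeting for a board whose only I/O is a serial port looks roughly like this; uart_putc is a placeholder for whatever output routine the board provides, and the exact set of stubs needed depends on which parts of the library you pull in:

#include <sys/stat.h>

extern void uart_putc(char c);             /* placeholder: board-specific output */

/* Minimal newlib syscall stubs: stdout/stderr go to the UART,
   everything else reports "not supported" or a harmless default. */
int _write(int fd, char *buf, int len)
{
    for (int i = 0; i < len; i++)
        uart_putc(buf[i]);
    return len;
}

int _read(int fd, char *buf, int len)       { return 0;  }
int _close(int fd)                          { return -1; }
int _lseek(int fd, int offset, int whence)  { return 0;  }
int _isatty(int fd)                         { return 1;  }
int _fstat(int fd, struct stat *st)         { st->st_mode = S_IFCHR; return 0; }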