I'm trying to compile, on my Mac, some software I've been writing on Linux that uses some fancy new C++0x features. I used MacPorts to install the gcc45 package, which gave me /opt/local/bin/g++-mp-4.5; however, this compiler doesn't want to compile anything in <thread>. E.g. I try to compile:
//test.cpp
#include <thread>
int main()
{
    std::thread x;
    return 0;
}
and get:
bash-3.2$ /opt/local/bin/g++-mp-4.5 -std=c++0x test.cpp
test.cpp: In function 'int main()':
test.cpp:5:2: error: 'thread' is not a member of 'std'
test.cpp:5:14: error: expected ';' before 'x'
A quick look in /opt/local/include/gcc45/c++/thread shows that the std::thread class is defined, but is guarded by the following:
#if defined(_GLIBCXX_HAS_GTHREADS) && defined(_GLIBCXX_USE_C99_STDINT_TR1)
Again, this works perfectly on my Ubuntu machine, so what's the proper way to enable the C++0x <thread> library under the MacPorts version of g++ 4.5 (g++-mp-4.5)? Failing that, is there anything I need to know (configure flags, etc.) before I go about compiling gcc 4.5 myself?
Update:
It doesn't look like the SO community knows much about this, so maybe it's time to go a little closer to the developers. Does anyone know of an official mailing list I could forward this question to? Are there any etiquette tips to help me get an answer?
Update 2:
I asked SO for another temporary solution here, and so for now I'm just substituting the boost::thread libraries for the std ones. Unfortunately, there is no boost::future, so this isn't quite a full solution yet. Any help would still be greatly appreciated.
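For reference, the substitution looks like this: a minimal sketch, assuming Boost installed from MacPorts (the exact library name and paths may differ on your install):

//test_boost.cpp
#include <boost/thread.hpp>
#include <iostream>

void hello()
{
    std::cout << "hello from a thread\n";
}

int main()
{
    boost::thread x(hello); // stands in for std::thread x(hello);
    x.join();
    return 0;
}

This builds with something like: /opt/local/bin/g++-mp-4.5 -I/opt/local/include -L/opt/local/lib test_boost.cpp -lboost_thread-mt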
Actually, the <thread> library doesn't work under Mac OS X because the pthreads implementation there lacks some functions with timeouts (e.g. pthread_mutex_timedlock()). The availability of these functions is supposed to be indicated by the _POSIX_TIMEOUTS macro, but it is defined to -1 on Mac OS X 10.4, 10.5, and 10.6 (I don't know about 10.7), and these functions really are absent from pthread.h.
The _POSIX_TIMEOUTS macro is checked during the configuration of libstdc++. If the check succeeds, the _GLIBCXX_HAS_GTHREADS macro is defined and the contents of <thread> become available with -std=c++0x.
libstdc++ really does need _POSIX_TIMEOUTS, e.g. in the std::timed_mutex implementation (see the <mutex> header).
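For illustration, this is the kind of C++0x code whose implementation bottoms out in pthread_mutex_timedlock(); a minimal sketch using the final C++11 names (early GCC snapshots may have used draft names):

#include <mutex>
#include <chrono>

std::timed_mutex m;

void f()
{
    // try_lock_for() is built on the gthreads timed-lock hook,
    // which on POSIX systems is pthread_mutex_timedlock()
    if (m.try_lock_for(std::chrono::milliseconds(100)))
    {
        // ...critical section...
        m.unlock();
    }
}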
To summarize, I think <thread> will become available on Mac OS X when GCC's gthreads layer or libstdc++ implements an emulation of pthread_mutex_timedlock() (and the others), or when these functions are implemented in Mac OS X itself.
Or maybe the future C++ standard will provide a way to query for such library features (e.g. these timed functions and classes), and it will be possible to build libstdc++ with those features disabled. However, I'm not very familiar with the upcoming standard, and I have my doubts about that.
Update: gcc 4.7 now allows compilation of <thread> on OS X. See here.
I'm porting my Windows Mono application to Linux, one step at a time, first to WSL Linux under Windows 10, then hopefully to Real Ubuntu. Everything described here behaves identically under both WSL and Ubuntu 20.04.
My application loads the /usr/lib/libmono-2.0.so shared library, but in doing so the loader throws an exception: undefined symbol: _ZTIPi. Some research showed that this symbol was defined in libstdc++.so, so I "pre-loaded" that as well. That had no effect.
Meanwhile, I found this code fragment while perusing the Mono code base. It's in mono/mini/mini-llvm.c: (https://github.com/mono/mono/blob/main/mono/mini/mini-llvm.c)
/* Add a reference to the c++ exception we throw/catch */
{
    LLVMTypeRef exc = LLVMPointerType (LLVMInt8Type (), 0);
    module->sentinel_exception = LLVMAddGlobal (module->lmodule, exc, "_ZTIPi");
    LLVMSetLinkage (module->sentinel_exception, LLVMExternalLinkage);
    mono_llvm_set_is_constant (module->sentinel_exception);
}
What exactly is happening here? What do I need to know and understand to successfully load this library?
The solution so far was to build and install Mono from source, outside of the default release.
See https://www.mono-project.com/docs/compiling-mono/linux/
then follow the "One Stop Shop Build Script (Debian)" section.
If you follow the instructions, your alternate Mono installation will end up in /usr/local.
This would suggest that the Mono release was somehow incorrectly compiled.
Having found this after hitting the same issue: thank you for the information in the question and answer; it helped me figure out how to solve it. However, your solution doesn't answer your own "What exactly is happening here?", which I shall try to explain.
Let's start with the _ZTIPi symbol, which when run through c++filt comes out as: typeinfo for int*
That is indeed part of the C++ standard library. (The fragment you found is Mono's LLVM backend declaring an external global with exactly that name: Mono throws/catches its sentinel exception as a C++ int* exception, which is what creates the reference.) It is listed as undefined in the default install of libmono, but not listed at all in the one I compiled from source.
You can check this with something like:
$ readelf --all /usr/lib/libmono-2.0.so | grep _ZTIPi
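As an aside, here is where such a reference comes from in the first place. Any translation unit that throws or catches an int* needs the typeinfo for int*, and for fundamental types that typeinfo is defined in libstdc++, not in your own object file. A minimal sketch:

// ztipi.cpp
int main()
{
    try {
        // __cxa_throw is passed a pointer to the typeinfo for int*,
        // which the compiler emits as an undefined reference to _ZTIPi
        throw static_cast<int *>(0);
    } catch (int *) {
    }
    return 0;
}

Compiling with $ g++ -c ztipi.cpp and running readelf --all ztipi.o | grep _ZTIPi shows the same kind of undefined reference as libmono has.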
However, libstdc++.so isn't listed as a needed library in either case.
You can check this with something like:
$ readelf --all /usr/lib/libmono-2.0.so | grep NEEDED
This all means that when you try to load libmono, the dynamic linker will try to resolve the _ZTIPi symbol by looking in the libraries listed by libmono (where it can't find it) and in the global symbol table for the process, which may or may not have it.
If you link against libmono at compile time, the solution is to also link against libstdc++ at that time. This way your executable will list libstdc++ as a needed library, causing the dynamic linker to load its symbols into the global symbol table. When loading libmono, the dynamic linker can then find the symbol fine.
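For example (names illustrative):

$ gcc myapp.c -lmono-2.0 -lstdc++

Or simply link with g++, which adds libstdc++ automatically.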
If you load libmono dynamically at runtime, you'll have to load libstdc++ first, using something like this:
void * cxxlib = dlopen("libstdc++.so.6", RTLD_LAZY | RTLD_GLOBAL);
The key thing here is the RTLD_GLOBAL flag, which adds all the symbols to the global table so they can be found when you try to load libmono.
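Putting it together, a minimal sketch of the runtime load sequence (library names illustrative, with error handling):

#include <dlfcn.h>
#include <stdio.h>

int main()
{
    /* Load the C++ runtime with RTLD_GLOBAL first, so its symbols
       (including _ZTIPi) land in the process's global symbol table. */
    void *cxxlib = dlopen("libstdc++.so.6", RTLD_LAZY | RTLD_GLOBAL);
    if (!cxxlib) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }

    /* Now the dynamic linker can resolve libmono's undefined symbols. */
    void *mono = dlopen("libmono-2.0.so", RTLD_LAZY);
    if (!mono) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }
    return 0;
}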
An alternative is of course to build Mono yourself, which should leave you with a libmono that isn't subtly broken. In that case, make sure to throw in a call to mono_set_dirs, like this:
mono_set_dirs("/opt/mono/lib", "/opt/mono/etc");
I'm trying to write a game based on the Love2d framework, compiled from Moonscript. Every time I make a mistake in my code, my application throws an error, and the error refers to the compiled Lua code, not the Moonscript, so I have no idea where exactly the error happens. Please tell me, what is the solution in this situation? Thanks.
Moonscript does support source-mapping/error-rewriting, but it is only supported when running in the moon interpreter: https://moonscript.org/reference/command_line.html#error_rewriting
I think it could be enabled in another Lua environment, but I am not completely sure what would be involved.
It would definitely require Moonscript to hold on to the source-map tables that are created during compilation, so you couldn't use moonc; instead, use the moonscript module to just-in-time compile require'd modules:
main.lua
-- attempt to require moonscript,
-- for development
pcall(require, 'moonscript')
-- load the main file
require 'init'
init.moon
love.draw = ->
  print "test"
With this code and Moonscript properly installed, you can just run the project using love . as normal. The require 'moonscript' call will change require to compile Moonscript modules on the fly. The performance penalty is negligible, and after all modules have been loaded there is no difference.
Debugging is a problem for pretty much any source-to-source compilation system. The target language has no idea that the original language exists, so it can only talk about things in terms of the target language. The more divergent the target and original languages are, the more difficult debugging will be.
This is a big part of the reason why C++ compilers don't compile to C anymore.
The only real way to deal with this is to become intimately familiar with how the Moonscript compiler generates Lua from your Moonscript code. Learn Lua and carefully read the output Lua, comparing it to the given Moonscript. That will make it easier for you to map the given Lua error and source code to the actual Moonscript code that created it.
Has anyone successfully ported googleTest to a real time process in WindRiver 3.0 / VxWorks 6.6 ?
I am able to get gtest to build, but I get a few errors when linking. I can modify these specific sections of code, but that only produces run-time errors.
Here is what I'm seeing:
googleTest.so: undefined reference to `isascii(int)'
googleTest.so: undefined reference to `gettimeofday'
googleTest.so: undefined reference to `strcasecmp'
I have two shared libraries (.so), one for gtest and one for gtest_main, and one RTP (real-time process) containing my test code.
Note:
To get googletest to compile in VxWorks, I had to modify some of the flags. Specifically:
GTEST_HAS_POSIX_RE = 0
GTEST_HAS_TR1_TUPLE = 0
GTEST_HAS_STREAM_REDIRECTION = 0
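(For reference, rather than editing gtest's headers, the same effect can usually be achieved on the compile line; a sketch, assuming a GNU-style compiler driver:

-DGTEST_HAS_POSIX_RE=0 -DGTEST_HAS_TR1_TUPLE=0 -DGTEST_HAS_STREAM_REDIRECTION=0

These are the standard gtest-port.h configuration macros, so defining them on the command line overrides the autodetected values.)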
Any insight or advice is much appreciated.
It turns out the way the kernel was configured was incorrect.
To remedy the problem, I made a brand-new kernel, keeping all of the default settings. This worked.
VxWorks is not yet supported by Google Test.
Also note that certain changes may need to be made to the code to support the platform. For example, getClockTime may not exist, and the code may have to be altered to use a user-defined method.
I think the solution is unique to your platform, target, and simulator, as well as your development environment and the versions of the tools (VxWorks / Wind River, etc.).
Google Test does seem to be supported on VxWorks 7:
https://github.com/Wind-River/vxworks7-google-test/blob/master/README.md
I have compiled it and built a DKM, but at the moment it does not seem to run any tests, so I am not sure what is going on there.
I have a C project which was previously being built with CodeSourcery's GNU toolchain. Recently it was converted to use RealView's armcc compiler, but the performance we are getting with the RealView tools is very poor compared to when it is compiled with the GNU tools. Shouldn't it be the opposite, i.e. shouldn't it give better performance when compiled with RealView's tools? What am I missing here? How can I improve the performance with RealView's tools?
Also, I have noticed that if I run the binary produced by the RealView tools with Lauterbach it crashes, but if I run it using RealView ICE it runs fine.
UPDATE 1
Realview Command line:
armcc -c --diag_style=ide --depend_format=unix_escaped --no_depend_system_headers --no_unaligned_access --c99 --arm_only --debug --gnu --cpu=ARM1136J-S --fpu=SoftVFP --apcs=/nointerwork -O3 -Otime
GNU GCC command line:
arm-none-eabi-gcc -mcpu=arm1136jf-s -mlittle-endian -msoft-float -O3 -Wall
I am using Realview Tools version 4.1 and GCC version 4.4.1
UPDATE 2
The Lauterbach issue has been solved. It was caused by semihosting: the semihosting SWI was not being handled in the Lauterbach environment. Retargeting the C library to avoid semihosting did the trick, and now my program runs successfully with Lauterbach as well as RealView ICE. But the performance issue remains.
Since you have optimisations on, and it crashes in some environments, it may be that your code relies on undefined behaviour or has some other latent error. Such behaviour can change with optimisation, or break altogether.
I suggest that you try both tool-chains without optimisation, make sure the warning level is set high, and fix every warning. GCC is far better than armcc at error checking, so it serves as a reasonable static-analysis check. If the code builds clean, it is more likely to work and may be easier for the optimiser to handle.
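For example, something like this on the GCC side (flags illustrative), plus the equivalent -O0 build with armcc, and then compare again:

$ arm-none-eabi-gcc -mcpu=arm1136jf-s -O0 -Wall -Wextra -c test.c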
Have you tried removing '--no_unaligned_access'? ARM11s can typically do unaligned accesses (if enabled in the startup code), and forcing the compiler/library not to use them may be slowing down your code.
The current version of RVCT says of '--fpu=SoftVFP':
"In previous releases of RVCT, if you specified --fpu=softvfp and a CPU with implicit VFP hardware, the linker chose a library that implemented the software floating-point calls using VFP instructions. This is no longer the case. If you require this legacy behavior, use --fpu=softvfp+vfp."
This suggests to me that if you perhaps have an old version of RVCT the behaviour will be to use software floating point regardless of the presence of hardware floating point. While in the GNU version -msoft-float will use hardware floating point instructions when an FPU is available.
So what version of RVCT are you using?
Either way, I suggest that you remove the --fpu option, since the compiler will make an appropriate implicit selection based on the --cpu option. You also need to correct the CPU selection: your RVCT option says --cpu=ARM1136J-S, not ARM1136JF-S as you told GCC. This will no doubt prevent the compiler from generating VFP instructions, since you told it the CPU has no VFP.
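Under those assumptions, the corrected invocation would look something like this (a sketch only, keeping your other flags in place of the ...):

armcc -c --c99 --arm_only --cpu=ARM1136JF-S -O3 -Otime ...

This lets the compiler choose the FPU and library support implied by the CPU.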
The same source code can produce dramatically different binaries due to factors like:

- different compilers (llvm vs gcc, gcc 4 vs gcc 3, etc.)
- different versions of the same compiler
- different compiler options with the same compiler
- optimization levels (on either compiler)
- release vs debug builds (or whatever terms you want to use; the binaries are quite different)

When going embedded, you add the complications of a bootloader or ROM monitor (debugger), plus the host-side tools that talk to the ROM monitor or compiled-in debugger. Despite producing a far better compiler than gcc, ARM's tools were infected with the assumption that binaries would always run on top of their ROM monitor. I seem to remember that by the time RVCT became their primary compiler that assumption was on its way out, but I have not really used their tools since then.
The bottom line is that there are a handful of major factors affecting the binaries, and they can and will lead to a different experience. Assuming you will get the same performance or results is a bad assumption; the expectation is that the results will differ. Likewise, within the same environment, you should be able to create binaries that give dramatically different performance results, all from the same source code.
Do you have compiler optimizations turned on in your CodeSourcery build, but not in the Realview build?
I'm trying to do an ioctl command through the Mono framework, but I can't find what I'm looking for.
I'm trying to send a command to a DVB card that has a kernel module. I hope someone can link to or clearly explain how this can be done. Any example of Mono talking to kernel modules would be useful, I guess.
Mono does not contain a wrapper for ioctl in Mono.Unix, because ioctl call parameters vary greatly and such a wrapper would be almost useless. You should declare a DllImport for each ioctl you need.
You probably don't need a helper library written in C; however, you may need one during development to extract the actual values hidden behind various C preprocessor macros. For example, to expand this C header definition:
#define FE_GET_INFO _IOR('o', 61, struct dvb_frontend_info)
compile and execute this helper:
#include <linux/dvb/frontend.h>
#include <stdio.h>

int main()
{
    /* _IOR() packs direction, type, number and size into one request
       code; cast to int so the printed C# constant keeps the same
       bit pattern */
    printf("const int FE_GET_INFO = %d;\n", (int) FE_GET_INFO);
    return 0;
}
A short mono mailing list discussion on the topic.
ioctl isn't supported by Mono AFAIK. Too OS-specific, and the parameter list depends on the actual request. You could try DllImport
Interop with Native Libraries
You should write a wrapper library for your exact calls. Look at how Mono.Unix wraps syscalls (google codesearch for Mono.Unix Syscall.cs) to get the idea. Then create a wrapper for each specific ioctl command, which uses your own representation of the data.
As jitter said, you'll need to DllImport the ioctl itself.
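If marshalling a raw ioctl from the managed side gets awkward, another option is the tiny native wrapper library described above, which you P/Invoke instead. A hypothetical sketch (file and function names invented):

/* dvb_shim.c - build as C so the symbol stays unmangled:
   gcc -shared -fPIC -o libdvbshim.so dvb_shim.c */
#include <sys/ioctl.h>
#include <linux/dvb/frontend.h>

int dvb_get_frontend_info(int fd, struct dvb_frontend_info *info)
{
    /* wraps the FE_GET_INFO ioctl so the managed side only needs
       one simple [DllImport("dvbshim")] declaration */
    return ioctl(fd, FE_GET_INFO, info);
}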
Check my similar question, and my later question on the subject. In those I'm trying to wrap the Video4Linux interface, which could be of interest to you.
I really recommend those readings.