MINGW: How to disable 'Treat WChar_t As Built In Type' in g++ linking shared/dynamic library

I built the shared libraries with MinGW, with wchar_t NOT treated as a built-in type:
#ifdef _NATIVE_WCHAR_T_DEFINED
typedef wchar_t UShort; // Treat as Built In
#else
typedef unsigned short UShort; // NOT treated as Built In
#endif
When linking the shared libraries with a small program:
g++ -o helloworld main.cpp -I../include/.. -L../lib -l..
wchar_t is treated as a built-in type by default, and I get a linker error (undefined reference to `).
I have checked the g++ command-line options here: https://man7.org/linux/man-pages/man1/g++.1.html
and also tried the option below:
With -fpreprocessed, predefinition of command line and most builtin macros is disabled. Macros such as "__LINE__", which are contextually dependent, are handled normally. This enables compilation of files previously preprocessed with "-E -fdirectives-only".
But that did not succeed; the error remains.

Setting the flag -D_NATIVE_WCHAR_T_DEFINED=OFF works:
g++ -D_NATIVE_WCHAR_T_DEFINED=OFF -o helloworld main.cpp -I../include/.. -L../lib -l..
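(Note that #ifdef only tests whether the macro is defined, not its value, so a tiny hypothetical probe, check.cpp, compiled with and without the flag, shows which branch the headers actually take:)
#include <cstdio>

int main() {
#ifdef _NATIVE_WCHAR_T_DEFINED
    std::puts("_NATIVE_WCHAR_T_DEFINED is defined: wchar_t typedef active");
#else
    std::puts("_NATIVE_WCHAR_T_DEFINED is not defined: unsigned short typedef active");
#endif
    return 0;
}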
Thank you

Related

Command line to build C++ program with LLVM libs

I am starting out in the world of LLVM. I have searched in several places and read a lot of documentation about LLVM, but I found nothing showing how to compile a program that uses LLVM headers and libs.
I wrote this simple program just to try to compile it. Using the Visual Studio cross-compiler, I tried several command-line options, even the -lLLVM option, but nothing worked.
I tried using g++ and clang++:
#include <iostream>
#include <llvm/ADT/OwningPtr.h>
#include <llvm/Support/MemoryBuffer.h>
int main()
{
llvm::OwningPtr<llvm::MemoryBuffer> buffer;
return 0;
}
When I try to build, I get this error:
error : 'llvm/ADT/OwningPtr.h' file not found
So, what is the command line to compile this simple program?
The command llvm-config --cxxflags --ldflags --system-libs --libs core will provide you with all the flags and linkable LLVM libraries, provided you have LLVM installed. Just splice its output into your compile command with shell command substitution (backticks or $(...)).
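A minimal sketch, assuming the program above is saved as main.cpp and llvm-config is on the PATH:
clang++ main.cpp $(llvm-config --cxxflags --ldflags --system-libs --libs core) -o main
The same works with g++ in place of clang++.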

How to compile Cuda within Clang when included by main c++ file?

I am currently working on a project where I want to execute some code in CUDA, called from the main C++ file. When I compile with Clang, only the .cpp files are compiled, and the compiler tells me "expected expression" at the <<<>>> kernel call notation. Any idea how I can fix this?
I have a .cuh file with the definitions, which I am including, and a .cu source file. I am using CMake to configure the project and Ninja to build it.
I am using clang++ through ccache and supplying "--cuda-path=/usr/local/cuda-10.1 --cuda-gpu-arch=sm_61 -L/usr/local/cuda-10.1/lib64 -lcudart_static -ldl -lrt -pthread -std=c++17" to the clang args.
When I add the -x cuda flag, the error does not appear, but instead it tells me that a library I am linking against is not allowed to overload some host function. I think this is because it then wants to compile everything as CUDA, which is not intended.
I am passing all files inside my source folder to add_executable in CMake via a GLOB ${APP_PATH}/src/*, which should add all files.
main.cpp
#include "ParticleEngine.cuh"
...
int main(){
simulation_timestep(&this->particles[0], this->gravity, 1, delta_frame,
this->particles.size());
}
ParticleEngine.cuh
#pragma once
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
void simulation_timestep(Particle *particles, ci::vec3 gravity, double mass,
double time_delta, unsigned long long n_particles);
ParticleEngine.cu
#include "ParticleEngine.cuh"
__global__ void particle_kernel(Particle *particles, ci::vec3 *gravity,
double *mass, double *time_delta) {
...
}
void simulation_timestep(Particle *particles, ci::vec3 gravity, double mass,
double time_delta, unsigned long long n_particles) {
... //memcpy stuff
particle_kernel<<<dimgrid, dimblock>>>(cuda_particles, cuda_gravity,
cuda_mass, cuda_time_delta);
...
}
edit:
Full error message:
[build] In file included from ../src/main.cpp:1:
[build] ../src/ParticleEngine.cu:43:20: error: expected expression
[build] particle_kernel<<<dimgrid, dimblock>>>(cuda_particles, cuda_gravity,
[build] ^
edit:
Error message when executing clang with -x cuda:
[build] /home/mebenstein/Cinder/include/glm/gtx/io.inl:97:32: error: __host__ __device__ function 'get_facet' cannot overload __host__ function 'get_facet'
[build] GLM_FUNC_QUALIFIER FTy const& get_facet(std::basic_ios<CTy, CTr>& ios)
[build] ^
[build] /home/mebenstein/Cinder/include/glm/gtx/io.hpp:145:14: note: previous declaration is here
[build] FTy const& get_facet(std::basic_ios<CTy,CTr>&);
[build] ^
I am including the C++ library Cinder in main.cpp, and this error appears.
#include in C++ works by literally replacing that statement with the contents of the included file. As a consequence, the included file is also parsed as C++ code.
To compile a file as CUDA code, the file needs to be a separate compilation unit, i.e. given as an argument to the clang invocation. It also either needs to have a name ending in .cu, or the -x cuda flag needs to be given to clang.
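For instance, reusing the flags already given in the question, separate compilation might look like this (a sketch; file names as in the question):
clang++ --cuda-path=/usr/local/cuda-10.1 --cuda-gpu-arch=sm_61 -std=c++17 -c ParticleEngine.cu -o ParticleEngine.o
clang++ -std=c++17 -c main.cpp -o main.o
clang++ main.o ParticleEngine.o -L/usr/local/cuda-10.1/lib64 -lcudart_static -ldl -lrt -pthread -o app
With CMake, this amounts to listing ParticleEngine.cu as its own entry in add_executable rather than pulling it into main.cpp's translation unit.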
Update after error messages have been included in the question:
It appears that Cinder does not support compilation of the CUDA part with clang++ because of a difference in how __host__/__device__ attributes are treated.
At this point your options are the following:
You can modify Cinder to also support clang++, it's open source.
You can ask the Cinder authors or third parties whether they are willing to make the necessary changes. A cash incentive may or may not increase willingness.
You can use nvcc to compile the code.
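As a rough sketch of the last option (the exact flags are assumptions; note that CUDA 10.1's nvcc supports at most -std=c++14):
nvcc -arch=sm_61 -std=c++14 -c ParticleEngine.cu -o ParticleEngine.o
clang++ -std=c++17 -c main.cpp -o main.o
clang++ main.o ParticleEngine.o -L/usr/local/cuda-10.1/lib64 -lcudart -ldl -lrt -pthread -o app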

How to build both a library and a test executable in the same Eclipse/CDT project?

I have built a shared library in C++ under Eclipse/CDT. To manage my project's tests, I would like to have, in the same project, both the library and an executable for running tests on it.
How can I do that, please?
For the library itself, I have standard build settings: a Debug and a Release target, with the -fPIC compile option, an artifact type of Shared Library, extension so and prefix lib, and the -shared linker option.
For the test program, I have added a main.cpp file to the same project:
#ifdef TEST_
#include <cstdlib>
#include <iostream>
#include "config.h"
using namespace std;
int main(int argc, char **argv) {
cout << "Test for project utils" << endl;
return 0;
}
#endif /* TEST_ */
I have added a specific Test target, copied from the Debug one and adapted for standard executable build settings: suppress the -fPIC compile option, add -D TEST_, change the artifact type to Executable, suppress the extension so and prefix lib, and suppress the -shared option for the linker.
Now, just build Debug, Release, and Test as normal, which can be done independently. The Test target could easily be duplicated into, say, Test-Debug and Test-Release, to get a library self-test run just after installation.
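For reference, the two kinds of targets amount to roughly these command lines (utils.cpp is a hypothetical library source file):
g++ -fPIC -shared -o libutils.so utils.cpp
g++ -D TEST_ -o utils-test main.cpp utils.cpp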

Creating DLL from CUDA using nvcc

I want to create a .dll from CUDA code (kernel.cu) in order to use this library from an external C program. After some attempts, I just left a simple C function in the .cu file. The code follows:
kernel.cu
#include <stdio.h>
#include "kernel.h"
void hello(const char *s) {
printf("Hello %s\n", s);
}
kernel.h
#ifndef KERNEL_H
#define KERNEL_H
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#ifdef __cplusplus
extern "C" {
#endif
void __declspec(dllexport) hello(const char *s);
#ifdef __cplusplus
}
#endif
#endif // KERNEL_H
I tried to first generate a kernel.o object with nvcc, and afterwards I used g++ to create the DLL, as follows:
nvcc -c kernel.cu -o kernel.o
g++ -shared -o kernel.dll kernel.o -L"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\lib\x64" -lcudart
It works fine and generates kernel.dll. To test the DLL I wrote this simple program, main.c:
#include <stdio.h>
#ifdef __cplusplus
extern "C" {
#endif
void __declspec ( dllimport ) hello(const char *s);
#ifdef __cplusplus
}
#endif
int main(void) {
hello("World");
return 0;
}
compiled with:
g++ -o app.exe main.c -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -L. -lkernel
The result is a memory access error when execution starts.
Nevertheless, if I rename the .cu file to .c (as it is just C code) and use the same commands, it does work. nvcc's output changes, as far as I know because it uses the default C compiler instead of the CUDA one.
What do you think, is it a problem related to nvcc? Or am I making some mistake?
EDIT: I forgot some info which may be important. Warnings appear in the first call to g++ (when the DLL is created), and they differ depending on whether the file is .cu, .c, or .cpp.
.cu
Warning: .drectve `/FAILIFMISMATCH:"_MSC_VER=1600" /FAILIFMISMATCH:"_ITERATOR_DEBUG_LEVEL=0"
/DEFAULTLIB:"libcpmt" /DEFAULTLIB:"LIBCMT" /DEFAULTLIB:"OLDNAMES" /EXPORT:hello ' unrecognized
and it doesn't work.
.cpp and .c
Warning: .drectve `/DEFAULTLIB:"LIBCMT" /DEFAULTLIB:"OLDNAMES" /EXPORT:hello ' unrecognized
and it works.
Solved. I still don't know why it happened (maybe it is because I was not using the official compiler, as Robert Crovella said), but replacing the two commands for making a DLL with this single one works:
nvcc -o kernel.dll --shared kernel.cu
Note the double dash (nvcc works this way), and the fact that it builds the DLL directly instead of first creating a .o and then making the DLL from the object.
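To verify that hello is actually exported from the DLL, binutils' objdump can dump the PE export table (a sketch; assumes MinGW binutils are installed):
objdump -p kernel.dll | grep hello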
In Visual Studio you can also make it compile into a .dll instead of a .obj file by navigating through the options:
DEBUG -> -Project name- Properties -> Configuration Properties -> Configuration Type
Change the option from Application (.exe) to Dynamic Library (.dll).
You can find the DLL after compiling in the Debug or Release folder.

Linker not taking local (user) boost installation with g++

I want to have a local installation of the boost C++ libraries (in my home folder (Linux), say $HOME/boost) in addition to a system-wide installed default of the boost libs. I built them from source and that worked fine.
After that, I set the environment variables CPLUS_INCLUDE_PATH and LD_LIBRARY_PATH to match the destination of the local installation, pointing to $HOME/boost/include and $HOME/boost/lib, respectively.
To test the correct usage of CPLUS_INCLUDE_PATH for the headers, I used the following code:
#include <boost/version.hpp>
#include <iostream>
#include <iomanip>
int main()
{
std::cout << "Boost version: " << BOOST_LIB_VERSION << std::endl;
return 0;
}
Compiling it with g++ -o Test_boost_version test_boost_version.cpp works, reporting the expected (local) version. Having CPLUS_INCLUDE_PATH empty gives me the boost version of the default, system-wide installation. So far so good.
In order to test the linking, I used the following code (taken from the boost homepage):
#include <boost/regex.hpp>
#include <iostream>
#include <string>
int main()
{
std::string line;
boost::regex pat( "^Subject: (Re: |Aw: )*(.*)" );
while (std::cin)
{
std::getline(std::cin, line);
boost::smatch matches;
if (boost::regex_match(line, matches, pat))
std::cout << matches[2] << std::endl;
}
}
and built it with g++ -o Test_boost_linking test_boost_linking.cpp -lboost_regex.
Calling ldd Test_boost_linking, however, shows that it does NOT use the local installation (provided via LD_LIBRARY_PATH) but gives me: libboost_regex.so.1.42.0 => /usr/lib/libboost_regex.so.1.42.0 (0x00007f9264612000)
When I use g++ -o Test_boost_linking test_boost_linking.cpp -lboost_regex -L$HOME/boost/lib, ldd is reporting the correct library (libboost_regex.so.1.50.0 => $HOME/boost/lib/libboost_regex.so.1.50.0 (0x00007f6947d2a000)).
This is actually a problem for me, since I want to set up my local environment such that compilation ignores the system-default boost installation and only uses the local one. I thought this is exactly what setting CPLUS_INCLUDE_PATH and LD_LIBRARY_PATH achieves, but for the latter this seems not to hold true.
So how can I make sure that using g++ -o Test_boost_linking test_boost_linking.cpp -lboost_regex (without -L) uses the local libraries?
[EDIT] Thinking about it further, I am wondering IF it is actually mandatory to use "-L$HOME/boost/lib" on the command line when using libraries in a non-standard directory (setting LDFLAGS as an environment variable seems to have no effect; it probably only works in combination with a Makefile). Is this the case?
(BTW, I think this holds true for other libraries as well, not only boost...)
(I used: g++ (Debian 4.4.5-8) 4.4.5)
Thank you.
You need to use the environment variable LIBRARY_PATH to let gcc know where to find the libraries at link time. LD_LIBRARY_PATH lets the program know where to find the dynamic libraries at runtime. This answer has more details. These links from "An Introduction to GCC" may also be useful: "Compilation options: Environment Variables" and "Shared and Static Libraries".
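A minimal sketch with the paths from the question:
export LIBRARY_PATH=$HOME/boost/lib      # used by gcc/g++ at link time
export LD_LIBRARY_PATH=$HOME/boost/lib   # used by the dynamic loader at run time
g++ -o Test_boost_linking test_boost_linking.cpp -lboost_regex
ldd Test_boost_linking                   # should now resolve libboost_regex to the copy under $HOME/boost/lib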