How do I fix an encog "kernel launch failure" error when running "./encog benchmark /gpu:1"?

As part of encog install testing, I tried running
./encog benchmark /gpu:0, which worked fine, but when I tried
./encog benchmark /gpu:1, I got:
encog-core/cuda_eval.cu(286) : getLastCudaError() CUDA error : kernel launch failure : (13) invalid device symbol.
I am on Ubuntu 11.10; I got the source code from https://github.com/encog/encog-c,
and "make ARCH=64 CUDA=1" completed without error.
Thanks for any help in solving this problem.
Here's the console listing for the benchmark that worked fine:
rick@rick-cuda:~/a01-neuralnet-encog/encog-c-master$ ./encog benchmark /gpu:0
* * Encog C/C++ (64 bit, CUDA) Command Line v1.0 * *
Copyright 2012 by Heaton Research, Released under the Apache License
Build Date: May 4 2013 07:24:00
Processor/Core Count: 32
Basic Data Type: double (64 bits)
GPU: disabled
Input Count: 10
Ideal Count: 1
Records: 10000
Iterations: 100
Performing benchmark...please wait
Benchmark time(seconds): 3.2856
Benchmark time includes only training time.
Encog Finished. Run time 00:00:03.2904
=============================================
Here's the benchmark run that had the problem:
rick@rick-cuda:~/a01-neuralnet-encog/encog-c-master$ ./encog benchmark /gpu:1
* * Encog C/C++ (64 bit, CUDA) Command Line v1.0 * *
Copyright 2012 by Heaton Research, Released under the Apache License
Build Date: May 4 2013 07:24:00
Processor/Core Count: 32
Basic Data Type: double (64 bits)
GPU: enabled
Input Count: 10
Ideal Count: 1
Records: 10000
Iterations: 100
Performing benchmark...please wait
encog-core/cuda_eval.cu(286) : getLastCudaError() CUDA error : kernel launch failure : (13) invalid device symbol.
==========================================
Here's what my GPU environment looks like:
rick@rick-cuda:~/a01-neuralnet-encog/encog-c-master$ ./encog cuda
* * Encog C/C++ (64 bit, CUDA) Command Line v1.0 * *
Copyright 2012 by Heaton Research, Released under the Apache License
Build Date: May 4 2013 07:24:00
Processor/Core Count: 32
Basic Data Type: double (64 bits)
GPU: enabled
Device 0: GeForce GTX 690
CUDA Driver Version / Runtime Version 5.0 / 5.0
CUDA Capability Major/Minor version number: 3.0
Total amount of global memory: 2048 MBytes (2147287040 bytes)
( 8) Multiprocessors x (192) CUDA Cores/MP: 1536 CUDA Cores
GPU Clock Speed: 1.02 GHz
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 2147483647 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Device 1: GeForce GTX 690
CUDA Driver Version / Runtime Version 5.0 / 5.0
CUDA Capability Major/Minor version number: 3.0
Total amount of global memory: 2048 MBytes (2147287040 bytes)
( 8) Multiprocessors x (192) CUDA Cores/MP: 1536 CUDA Cores
GPU Clock Speed: 1.02 GHz
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 2147483647 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Device 2: GeForce GTX 690
CUDA Driver Version / Runtime Version 5.0 / 5.0
CUDA Capability Major/Minor version number: 3.0
Total amount of global memory: 2048 MBytes (2147287040 bytes)
( 8) Multiprocessors x (192) CUDA Cores/MP: 1536 CUDA Cores
GPU Clock Speed: 1.02 GHz
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 2147483647 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Device 3: GeForce GTX 690
CUDA Driver Version / Runtime Version 5.0 / 5.0
CUDA Capability Major/Minor version number: 3.0
Total amount of global memory: 2048 MBytes (2147287040 bytes)
( 8) Multiprocessors x (192) CUDA Cores/MP: 1536 CUDA Cores
GPU Clock Speed: 1.02 GHz
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 2147483647 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Device 4: GeForce GTX 690
CUDA Driver Version / Runtime Version 5.0 / 5.0
CUDA Capability Major/Minor version number: 3.0
Total amount of global memory: 2048 MBytes (2147287040 bytes)
( 8) Multiprocessors x (192) CUDA Cores/MP: 1536 CUDA Cores
GPU Clock Speed: 1.02 GHz
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 2147483647 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Device 5: GeForce GTX 690
CUDA Driver Version / Runtime Version 5.0 / 5.0
CUDA Capability Major/Minor version number: 3.0
Total amount of global memory: 2048 MBytes (2147287040 bytes)
( 8) Multiprocessors x (192) CUDA Cores/MP: 1536 CUDA Cores
GPU Clock Speed: 1.02 GHz
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 2147483647 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Device 6: GeForce GTX 690
CUDA Driver Version / Runtime Version 5.0 / 5.0
CUDA Capability Major/Minor version number: 3.0
Total amount of global memory: 2048 MBytes (2147287040 bytes)
( 8) Multiprocessors x (192) CUDA Cores/MP: 1536 CUDA Cores
GPU Clock Speed: 1.02 GHz
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 2147483647 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Device 7: GeForce GTX 690
CUDA Driver Version / Runtime Version 5.0 / 5.0
CUDA Capability Major/Minor version number: 3.0
Total amount of global memory: 2048 MBytes (2147287040 bytes)
( 8) Multiprocessors x (192) CUDA Cores/MP: 1536 CUDA Cores
GPU Clock Speed: 1.02 GHz
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 2147483647 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Performing CUDA test.
Vector Addition
CUDA Vector Add Test was successful.
Encog Finished. Run time 00:00:10.9206
===============================
Here's the output of my "make":
rick@rick-cuda:~/a01-neuralnet-encog/encog-c-master$ make ARCH=64 CUDA=1
mkdir -p ./obj-cmd
gcc -c -o obj-cmd/encog-cmd.o encog-cmd/encog-cmd.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-cmd
gcc -c -o obj-cmd/cuda_test.o encog-cmd/cuda_test.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-cmd
gcc -c -o obj-cmd/node_unix.o encog-cmd/node_unix.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-cmd
/usr/local/cuda/bin/nvcc -o obj-cmd/cuda_vecadd.cu.o -c encog-cmd/cuda_vecadd.cu -I./encog-core/ -m64
mkdir -p ./obj-lib
gcc -c -o obj-lib/activation.o encog-core/activation.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/errorcalc.o encog-core/errorcalc.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/network_io.o encog-core/network_io.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/util.o encog-core/util.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/util_str.o encog-core/util_str.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/data.o encog-core/data.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/errors.o encog-core/errors.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/network.o encog-core/network.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/pso.o encog-core/pso.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/util_file.o encog-core/util_file.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/vector.o encog-core/vector.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/encog.o encog-core/encog.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/nm.o encog-core/nm.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/object.o encog-core/object.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/rprop.o encog-core/rprop.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/hash.o encog-core/hash.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/train.o encog-core/train.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
/usr/local/cuda/bin/nvcc -o obj-lib/encog_cuda.cu.o -c encog-core/encog_cuda.cu -I./encog-core/ -m64
mkdir -p ./obj-lib
/usr/local/cuda/bin/nvcc -o obj-lib/cuda_eval.cu.o -c encog-core/cuda_eval.cu -I./encog-core/ -m64
ptxas /tmp/tmpxft_00001b04_00000000-5_cuda_eval.ptx, line 141; warning : Double is not supported. Demoting to float
mkdir -p ./lib
ar rcs ./lib/encog.a ./obj-lib/activation.o ./obj-lib/errorcalc.o ./obj-lib/network_io.o ./obj-lib/util.o ./obj-lib/util_str.o ./obj-lib/data.o ./obj-lib/errors.o ./obj-lib/network.o ./obj-lib/pso.o ./obj-lib/util_file.o ./obj-lib/vector.o ./obj-lib/encog.o ./obj-lib/nm.o ./obj-lib/object.o ./obj-lib/rprop.o ./obj-lib/hash.o ./obj-lib/train.o ./obj-lib/encog_cuda.cu.o ./obj-lib/cuda_eval.cu.o
gcc -o encog obj-cmd/encog-cmd.o obj-cmd/cuda_test.o obj-cmd/node_unix.o obj-cmd/cuda_vecadd.cu.o lib/encog.a -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include -lm ./lib/encog.a -L/usr/local/cuda/lib64 -lcudart
rick@rick-cuda:~/a01-neuralnet-encog/encog-c-master$

I tried running this on my GeForce 580 and had no problem. I am on a different platform than you, since you are on the 6 series. I looked up the error in a few places on Google. It looks like it may be an issue with the way local memory is used, which might not work with the 6 series. You might want to submit an issue here:
https://github.com/encog/encog-c/issues
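Not a confirmed fix, but one thing that may be worth checking before filing the issue: in the make output above, nvcc compiles cuda_eval.cu with no -arch/-gencode flag, and ptxas warns "Double is not supported. Demoting to float", which means the kernel was built for the oldest default architecture rather than for compute capability 3.0 (the GTX 690). Rebuilding the CUDA objects with an explicit architecture flag is a reasonable experiment; the line below simply repeats the Makefile's nvcc invocation with that flag added.
# Assumption: same nvcc call as in the make log, with an explicit compute
# capability 3.0 target added for the GTX 690 (repeat for encog_cuda.cu, then relink).
/usr/local/cuda/bin/nvcc -o obj-lib/cuda_eval.cu.o -c encog-core/cuda_eval.cu -I./encog-core/ -m64 -gencode arch=compute_30,code=sm_30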

Related

Unable to compile with Rcpp

I use R version 4.0.2 (2020-06-22) -- "Taking Off Again". My system is Windows 10 Enterprise 1909.
I assume I have correctly installed Rtools40, since:
install.packages("jsonlite", type = "source") completes correctly, and
devtools::has_rtools() returns TRUE.
install.packages("jsonlite", type = "source")
trying URL 'https://cran.rstudio.com/src/contrib/jsonlite_1.6.1.tar.gz'
Content type 'application/x-gzip' length 1057910 bytes (1.0 MB)
downloaded 1.0 MB
* installing *source* package 'jsonlite' ...
** package 'jsonlite' successfully unpacked and MD5 sums checked
** using staged installation
** libs
*** arch - i386
C:/Users/toto26/DOCUME~1/R/R-40~1.2/etc/i386/Makeconf:244: warning: overriding recipe for target '.m.o'
C:/Users/toto26/DOCUME~1/R/R-40~1.2/etc/i386/Makeconf:237: warning: ignoring old recipe for target '.m.o'
"C:/rtools40//mingw32/bin/"gcc -I"C:/Users/toto26/DOCUME~1/R/R-40~1.2/include" -DNDEBUG -Iyajl/api -D__USE_MINGW_ANSI_STDIO -O2 -Wall -std=gnu99 -mfpmath=sse -msse2 -mstackrealign -c base64.c -o base64.o
[...]
"C:/rtools40//mingw32/bin/"ar rcs yajl/libstatyajl.a yajl/yajl.o yajl/yajl_alloc.o yajl/yajl_buf.o yajl/yajl_encode.o yajl/yajl_gen.o yajl/yajl_lex.o yajl/yajl_parser.o yajl/yajl_tree.o
C:/rtools40//mingw32/bin/gcc -shared -s -static-libgcc -o jsonlite.dll tmp.def base64.o collapse_array.o collapse_object.o collapse_pretty.o escape_chars.o integer64_to_na.o is_datelist.o is_recordlist.o is_scalarlist.o modp_numtoa.o null_to_na.o num_to_char.o parse.o prettify.o push_parser.o r-base64.o register.o row_collapse.o transpose_list.o validate.o -Lyajl -lstatyajl -LC:/Users/toto26/DOCUME~1/R/R-40~1.2/bin/i386 -lR
installing to C:/Users/toto26/Documents/R/R-4.0.2/library/00LOCK-jsonlite/00new/jsonlite/libs/i386
*** arch - x64
C:/Users/toto26/DOCUME~1/R/R-40~1.2/etc/x64/Makeconf:244: warning: overriding recipe for target '.m.o'
C:/Users/toto26/DOCUME~1/R/R-40~1.2/etc/x64/Makeconf:237: warning: ignoring old recipe for target '.m.o'
"C:/rtools40//mingw64/bin/"gcc -I"C:/Users/toto26/DOCUME~1/R/R-40~1.2/include" -DNDEBUG -Iyajl/api -D__USE_MINGW_ANSI_STDIO -O2 -Wall -std=gnu99 -mfpmath=sse -msse2 -mstackrealign -c base64.c -o base64.o
[...]
"C:/rtools40//mingw64/bin/"ar rcs yajl/libstatyajl.a yajl/yajl.o yajl/yajl_alloc.o yajl/yajl_buf.o yajl/yajl_encode.o yajl/yajl_gen.o yajl/yajl_lex.o yajl/yajl_parser.o yajl/yajl_tree.o
C:/rtools40//mingw64/bin/gcc -shared -s -static-libgcc -o jsonlite.dll tmp.def base64.o collapse_array.o collapse_object.o collapse_pretty.o escape_chars.o integer64_to_na.o is_datelist.o is_recordlist.o is_scalarlist.o modp_numtoa.o null_to_na.o num_to_char.o parse.o prettify.o push_parser.o r-base64.o register.o row_collapse.o transpose_list.o validate.o -Lyajl -lstatyajl -LC:/Users/toto26/DOCUME~1/R/R-40~1.2/bin/x64 -lR
installing to C:/Users/toto26/Documents/R/R-4.0.2/library/00LOCK-jsonlite/00new/jsonlite/libs/x64
** R
** inst
** byte-compile and prepare package for lazy loading
** help
*** installing help indices
converting help for package 'jsonlite'
finding HTML links ... done
base64 html
flatten html
fromJSON html
prettify html
rbind_pages html
read_json html
serializeJSON html
stream_in html
unbox html
validate html
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
*** arch - i386
*** arch - x64
** testing if installed package can be loaded from final location
*** arch - i386
*** arch - x64
** testing if installed package keeps a record of temporary installation path
* DONE (jsonlite)
The downloaded source packages are in
‘C:\Users\toto26\AppData\Local\Temp\RtmpQrssTP\downloaded_packages’
One of my issues may come from the location of make:
Sys.which("make")
make
"C:\\WINDOWS\\SYSTEM32\\make.exe"
instead of the expected
Sys.which("make")
## "C:\\rtools40\\usr\\bin\\make.exe"
I have asked my IT department to remove the system32 version from my laptop.
Nevertheless, I am not sure it is the root cause of my issues:
install.packages("Rcpp", type = 'source')
trying URL 'https://cran.rstudio.com/src/contrib/Rcpp_1.0.4.6.tar.gz'
Content type 'application/x-gzip' length 2751467 bytes (2.6 MB)
downloaded 2.6 MB
* installing *source* package 'Rcpp' ...
** package 'Rcpp' successfully unpacked and MD5 sums checked
** using staged installation
** libs
*** arch - i386
C:/Users/toto26/DOCUME~1/R/R-40~1.2/etc/i386/Makeconf:244: warning: overriding recipe for target '.m.o'
C:/Users/toto26/DOCUME~1/R/R-40~1.2/etc/i386/Makeconf:237: warning: ignoring old recipe for target '.m.o'
"C:/rtools40//mingw32/bin/"g++ -std=gnu++11 -I"C:/Users/toto26/DOCUME~1/R/R-40~1.2/include" -DNDEBUG -I../inst/include/ -O2 -Wall -mfpmath=sse -msse2 -mstackrealign -c api.cpp -o api.o
"C:/rtools40//mingw32/bin/"g++ -std=gnu++11 -I"C:/Users/toto26/DOCUME~1/R/R-40~1.2/include" -DNDEBUG -I../inst/include/ -O2 -Wall -mfpmath=sse -msse2 -mstackrealign -c attributes.cpp -o attributes.o
"C:/rtools40//mingw32/bin/"g++ -std=gnu++11 -I"C:/Users/toto26/DOCUME~1/R/R-40~1.2/include" -DNDEBUG -I../inst/include/ -O2 -Wall -mfpmath=sse -msse2 -mstackrealign -c barrier.cpp -o barrier.o
"C:/rtools40//mingw32/bin/"g++ -std=gnu++11 -I"C:/Users/toto26/DOCUME~1/R/R-40~1.2/include" -DNDEBUG -I../inst/include/ -O2 -Wall -mfpmath=sse -msse2 -mstackrealign -c date.cpp -o date.o
"C:/rtools40//mingw32/bin/"g++ -std=gnu++11 -I"C:/Users/toto26/DOCUME~1/R/R-40~1.2/include" -DNDEBUG -I../inst/include/ -O2 -Wall -mfpmath=sse -msse2 -mstackrealign -c module.cpp -o module.o
"C:/rtools40//mingw32/bin/"g++ -std=gnu++11 -I"C:/Users/toto26/DOCUME~1/R/R-40~1.2/include" -DNDEBUG -I../inst/include/ -O2 -Wall -mfpmath=sse -msse2 -mstackrealign -c rcpp_init.cpp -o rcpp_init.o
"C:/rtools40//mingw32/bin/"g++ -std=gnu++11 -shared -s -static-libgcc -o Rcpp.dll tmp.def api.o attributes.o barrier.o date.o module.o rcpp_init.o -LC:/Users/toto26/DOCUME~1/R/R-40~1.2/bin/i386 -lR
/usr/bin/sh: line 8: "C:/rtools40//mingw32/bin/"g++ -std=gnu++11 : No such file or directory
no DLL was created
ERROR: compilation failed for package 'Rcpp'
* removing 'C:/Users/toto26/Documents/R/R-4.0.2/library/Rcpp'
Warning in install.packages :
installation of package ‘Rcpp’ had non-zero exit status
If I install the precompiled binary version, it works, but then I can't use Rcpp:
install.packages("Rcpp")
trying URL 'https://cran.rstudio.com/bin/windows/contrib/4.0/Rcpp_1.0.4.6.zip'
Content type 'application/zip' length 3167452 bytes (3.0 MB)
downloaded 3.0 MB
package ‘Rcpp’ successfully unpacked and MD5 sums checked
The downloaded binary packages are in
C:\Users\toto26\AppData\Local\Temp\RtmpQrssTP\downloaded_packages
> Rcpp::evalCpp("2+2")
C:/Users/toto26/DOCUME~1/R/R-40~1.2/etc/x64/Makeconf:244: warning: overriding recipe for target '.m.o'
C:/Users/toto26/DOCUME~1/R/R-40~1.2/etc/x64/Makeconf:237: warning: ignoring old recipe for target '.m.o'
"C:/rtools40//mingw64/bin/"g++ -std=gnu++11 -I"C:/Users/toto26/DOCUME~1/R/R-40~1.2/include" -DNDEBUG -I"C:/Users/toto26/Documents/R/R-4.0.2/library/Rcpp/include" -I"C:/Users/toto26/AppData/Local/Temp/RtmpQrssTP/sourceCpp-x86_64-w64-mingw32-1.0.4.6" -O2 -Wall -mfpmath=sse -msse2 -mstackrealign -c file477046ea4828.cpp -o file477046ea4828.o
"C:/rtools40//mingw64/bin/"g++ -std=gnu++11 -shared -s -static-libgcc -o sourceCpp_2.dll tmp.def file477046ea4828.o -LC:/Users/toto26/DOCUME~1/R/R-40~1.2/bin/x64 -lR
/usr/bin/sh: line 8: "C:/rtools40//mingw64/bin/"g++ -std=gnu++11 : No such file or directory
Error in sourceCpp(code = code, env = env, rebuild = rebuild, cacheDir = cacheDir, :
Error occurred building shared library.
Any hints would be more than welcome.
Thanks a lot in advance.
Emmanuel
Edit: here is my PATH:
> Sys.getenv('PATH')
[1] "C:\\rtools40\\usr\\bin;C:\\Users\\toto26\\Documents\\R\\R-4.0.2\\bin\\x64;C:\\rtools40\\usr\\bin;C:\\rtools40\\mingw64\\bin;C:\\ProgramData\\DockerDesktop\\version-bin;C:\\Program Files\\Docker\\Docker\\Resources\\bin;C:\\RBuildTools\\bin;C:\\RBuildTools\\mingw_64\\bin;C:\\RBuildTools\\mingw_32\\bin;C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\javapath;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\Program Files\\MATLAB\\R2018a\\runtime\\win64;C:\\Program Files\\MATLAB\\R2018a\\bin;C:\\Program Files (x86)\\PDFtk\\bin\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\rtools40\\usr\\bin;C:\\RBuildTools\\3.5\\mingw_64\\bin;C:\\RBuildTools\\3.5\\bin;C:\\Python\\Scripts\\;C:\\Python\\;C:\\Users\\toto26\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\toto26\\AppData\\Local\\atom\\bin;C:\\Users\\toto26\\AppData\\Local\\Programs\\MiKTeX 2.9\\miktex\\bin\\x64\\;C:\\RBuildTools\\3.3\\mingw_64\\bin;C:\\MinGW\\bin;C:\\Users\\toto26\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\toto26\\AppData\\Local\\Pandoc\\;"

Tensorflow won't build with CUDA support

I've tried building TensorFlow from source as described in the installation guide. I've had success building it with CPU-only support and with the SIMD instruction sets, but I've run into trouble trying to build with CUDA support.
System information:
Mint 18 Sarah
4.4.0-21-generic
gcc 5.4.0
clang 3.8.0
Python 3.6.1
Nvidia GeForce GTX 1060 6GB (Compute capability 6.1)
CUDA 8.0.61
CuDNN 6.0
Here's my attempt at building with CUDA, gcc, and SIMD:
kevin@yeti-mint ~/src/tensorflow $ bazel clean
INFO: Starting clean (this may take a while). Consider using --async if the clean takes more than several minutes.
kevin@yeti-mint ~/src/tensorflow $ ./configure
You have bazel 0.5.2 installed.
Please specify the location of python. [Default is /home/kevin/.pyenv/shims/python]:
Found possible Python library paths:
/home/kevin/.pyenv/versions/tensorflow/lib/python3.6/site-packages
Please input the desired Python library path to use. Default is [/home/kevin/.pyenv/versions/tensorflow/lib/python3.6/site-packages]
/home/kevin/.pyenv/versions/3.6.1/lib/python3.6
Do you wish to build TensorFlow with MKL support? [y/N]
No MKL support will be enabled for TensorFlow
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Do you wish to use jemalloc as the malloc implementation? [Y/n]
jemalloc enabled
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N]
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N]
No XLA support will be enabled for TensorFlow
Do you wish to build TensorFlow with VERBS support? [y/N]
No VERBS support will be enabled for TensorFlow
Do you wish to build TensorFlow with OpenCL support? [y/N]
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] y
CUDA support will be enabled for TensorFlow
Do you want to use clang as CUDA compiler? [y/N]
nvcc will be used as CUDA compiler
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 8.0]:
Please specify the location where CUDA toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 6.0]:
Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "6.1"]:
Do you wish to build TensorFlow with MPI support? [y/N]
MPI support will not be enabled for TensorFlow
Configuration finished
kevin@yeti-mint ~/src/tensorflow $ bazel build --config=opt --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --verbose_failures //tensorflow/tools/pip_package:build_pip_package
WARNING: /home/kevin/src/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:exporter': Use SavedModel Builder instead.
WARNING: /home/kevin/src/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:gc': Use SavedModel instead.
INFO: Found 1 target...
ERROR: /home/kevin/.cache/bazel/_bazel_kevin/b937ae7b9a1087aeb7862ab37155238c/external/protobuf/BUILD:244:1: C++ compilation of rule '#protobuf//:js_embed' failed: crosstool_wrapper_driver_is_not_gcc failed: error executing command
(cd /home/kevin/.cache/bazel/_bazel_kevin/b937ae7b9a1087aeb7862ab37155238c/execroot/org_tensorflow && \
exec env - \
PATH=/home/kevin/.pyenv/shims:/home/kevin/.pyenv/shims:/home/kevin/.pyenv/bin:/home/kevin/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/kevin/.local/bin \
PWD=/proc/self/cwd \
external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -fPIE -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections -g0 '-std=c++11' -g0 -MD -MF bazel-out/host/bin/external/protobuf/_objs/js_embed/external/protobuf/src/google/protobuf/compiler/js/embed.d '-frandom-seed=bazel-out/host/bin/external/protobuf/_objs/js_embed/external/protobuf/src/google/protobuf/compiler/js/embed.o' -iquote external/protobuf -iquote bazel-out/host/genfiles/external/protobuf -iquote external/bazel_tools -iquote bazel-out/host/genfiles/external/bazel_tools -isystem external/bazel_tools/tools/cpp/gcc3 -no-canonical-prefixes -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -fno-canonical-system-headers -c external/protobuf/src/google/protobuf/compiler/js/embed.cc -o bazel-out/host/bin/external/protobuf/_objs/js_embed/external/protobuf/src/google/protobuf/compiler/js/embed.o): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 2.
python: can't open file 'external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc': [Errno 2] No such file or directory
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 5.578s, Critical Path: 0.06s
Turning off all extra flags:
kevin@yeti-mint ~/src/tensorflow $ bazel build --config=opt --verbose_failures //tensorflow/tools/pip_package:build_pip_package
WARNING: /home/kevin/src/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:exporter': Use SavedModel Builder instead.
WARNING: /home/kevin/src/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:gc': Use SavedModel instead.
INFO: Found 1 target...
ERROR: /home/kevin/.cache/bazel/_bazel_kevin/b937ae7b9a1087aeb7862ab37155238c/external/fft2d/BUILD.bazel:21:1: C++ compilation of rule '#fft2d//:fft2d' failed: crosstool_wrapper_driver_is_not_gcc failed: error executing command
(cd /home/kevin/.cache/bazel/_bazel_kevin/b937ae7b9a1087aeb7862ab37155238c/execroot/org_tensorflow && \
exec env - \
PATH=/home/kevin/.pyenv/shims:/home/kevin/.pyenv/shims:/home/kevin/.pyenv/bin:/home/kevin/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/kevin/.local/bin \
PWD=/proc/self/cwd \
external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -fPIE -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections -g0 -MD -MF bazel-out/host/bin/external/fft2d/_objs/fft2d/external/fft2d/fft/fftsg.d -iquote external/fft2d -iquote bazel-out/host/genfiles/external/fft2d -iquote external/bazel_tools -iquote bazel-out/host/genfiles/external/bazel_tools -isystem external/bazel_tools/tools/cpp/gcc3 -no-canonical-prefixes -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -fno-canonical-system-headers -c external/fft2d/fft/fftsg.c -o bazel-out/host/bin/external/fft2d/_objs/fft2d/external/fft2d/fft/fftsg.o): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 2.
python: can't open file 'external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc': [Errno 2] No such file or directory
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 3.522s, Critical Path: 2.42s
Trying with clang instead:
kevin@yeti-mint ~/src/tensorflow $ ./configure
You have bazel 0.5.2 installed.
Please specify the location of python. [Default is /home/kevin/.pyenv/shims/python]:
Found possible Python library paths:
/home/kevin/.pyenv/versions/tensorflow/lib/python3.6/site-packages
Please input the desired Python library path to use. Default is [/home/kevin/.pyenv/versions/tensorflow/lib/python3.6/site-packages]
/home/kevin/.pyenv/versions/3.6.1/lib/python3.6
Do you wish to build TensorFlow with MKL support? [y/N]
No MKL support will be enabled for TensorFlow
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Do you wish to use jemalloc as the malloc implementation? [Y/n]
jemalloc enabled
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N]
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N]
No XLA support will be enabled for TensorFlow
Do you wish to build TensorFlow with VERBS support? [y/N]
No VERBS support will be enabled for TensorFlow
Do you wish to build TensorFlow with OpenCL support? [y/N]
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] y
CUDA support will be enabled for TensorFlow
Do you want to use clang as CUDA compiler? [y/N] y
Clang will be used as CUDA compiler
Please specify which clang should be used as device and host compiler. [Default is /usr/bin/clang]:
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 8.0]:
Please specify the location where CUDA toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 6.0]:
Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "6.1"]:
Do you wish to build TensorFlow with MPI support? [y/N]
MPI support will not be enabled for TensorFlow
Configuration finished
kevin@yeti-mint ~/src/tensorflow $ bazel build --config=opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-msse4.2 --verbose_failures //tensorflow/tools/pip_package:build_pip_package
WARNING: /home/kevin/src/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:exporter': Use SavedModel Builder instead.
WARNING: /home/kevin/src/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:gc': Use SavedModel instead.
INFO: Found 1 target...
~1300 lines of build warnings and info...
ERROR: /home/kevin/.cache/bazel/_bazel_kevin/b937ae7b9a1087aeb7862ab37155238c/external/nccl_archive/BUILD:33:1: C++ compilation of rule '#nccl_archive//:nccl' failed: clang failed: error executing command
(cd /home/kevin/.cache/bazel/_bazel_kevin/b937ae7b9a1087aeb7862ab37155238c/execroot/org_tensorflow && \
exec env - \
CLANG_CUDA_COMPILER_PATH=/usr/bin/clang \
CUDA_TOOLKIT_PATH=/usr/local/cuda \
CUDNN_INSTALL_PATH=/usr/local/cuda-8.0 \
PWD=/proc/self/cwd \
PYTHON_BIN_PATH=/home/kevin/.pyenv/shims/python \
PYTHON_LIB_PATH=/home/kevin/.pyenv/versions/3.6.1/lib/python3.6 \
TF_CUDA_CLANG=1 \
TF_CUDA_COMPUTE_CAPABILITIES=6.1 \
TF_CUDA_VERSION=8.0 \
TF_CUDNN_VERSION=6 \
TF_NEED_CUDA=1 \
TF_NEED_OPENCL=0 \
/usr/bin/clang '-march=native' -mavx -mavx2 -mfma -msse4.2 '-march=native' -MD -MF bazel-out/local_linux-py3-opt/bin/external/nccl_archive/_objs/nccl/external/nccl_archive/src/reduce.cu.pic.d '-frandom-seed=bazel-out/local_linux-py3-opt/bin/external/nccl_archive/_objs/nccl/external/nccl_archive/src/reduce.cu.pic.o' -iquote external/nccl_archive -iquote bazel-out/local_linux-py3-opt/genfiles/external/nccl_archive -iquote external/local_config_cuda -iquote bazel-out/local_linux-py3-opt/genfiles/external/local_config_cuda -iquote external/bazel_tools -iquote bazel-out/local_linux-py3-opt/genfiles/external/bazel_tools -isystem external/local_config_cuda/cuda -isystem bazel-out/local_linux-py3-opt/genfiles/external/local_config_cuda/cuda -isystem external/local_config_cuda/cuda/include -isystem bazel-out/local_linux-py3-opt/genfiles/external/local_config_cuda/cuda/include -isystem external/bazel_tools/tools/cpp/gcc3 '-std=c++11' -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -fPIC -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -Wall -Wno-invalid-partial-specialization -fno-omit-frame-pointer -no-canonical-prefixes -DNDEBUG -g0 -O2 -ffunction-sections -fdata-sections '-DCUDA_MAJOR=0' '-DCUDA_MINOR=0' '-DNCCL_MAJOR=0' '-DNCCL_MINOR=0' '-DNCCL_PATCH=0' -Iexternal/nccl_archive/src -O3 -x cuda '-DGOOGLE_CUDA=1' '--cuda-gpu-arch=sm_61' -c bazel-out/local_linux-py3-opt/genfiles/external/nccl_archive/src/reduce.cu.cc -o bazel-out/local_linux-py3-opt/bin/external/nccl_archive/_objs/nccl/external/nccl_archive/src/reduce.cu.pic.o): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.
clang: error: Unsupported CUDA gpu architecture: sm_61
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 25.030s, Critical Path: 12.66s
This is consistent behavior on the current master branch (31aa360), r1.2 (5d8c0a6), and r1.1 (8ddd727). I've seen many GitHub issues (8790, 9651, 10367) and a Stack Overflow post or two (following one of them, I tried using gcc/g++ 4.8), but they all seem to be solved and/or only slightly related to my problem.
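A hedged troubleshooting sketch, not a verified fix: the nvcc-path failures complain that external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc cannot be opened, and that wrapper is regenerated when the CUDA configuration is re-run, so a stale Bazel workspace is one possible explanation. On the clang path, "Unsupported CUDA gpu architecture: sm_61" usually means the host clang (3.8.0 here) is too old to target a Pascal card.
# Assumption: run from the tensorflow source tree. --expunge wipes the whole
# output base so the generated CUDA crosstool files are recreated.
bazel clean --expunge
./configure   # choose nvcc (answer N to "use clang as CUDA compiler")
bazel build --config=opt --verbose_failures //tensorflow/tools/pip_package:build_pip_package
# If the clang route is still wanted, confirm what the installed clang reports.
clang --version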

Error Building Giraffe (Chess Program): /opt/local/bin/as: assembler (/opt/local/bin/clang) not installed

I'm trying to build the Giraffe deep-belief chess-playing program, which I cloned with Mercurial.
From README.md:
Tested on Linux (GCC 4.9), OS X (GCC 4.9), Windows (MinGW-W64 GCC 5.1). GCC versions earlier than 4.8 are definitely NOT supported, due to broken regex implementation in libstdc++.
Here's the error:
David-Laxers-MacBook-Pro:giraffe davidlaxer$ make
g++ -Wall -Wextra -Wno-unused-function -std=gnu++11 -mtune=native -Wa,-q -ffast-math -pthread -fopenmp -DHGVERSION="\"efceca80bf74\"" -O3 -march=native -Wa,-q -I. -c backend.cpp -o obj/backend.o
/opt/local/bin/as: assembler (/opt/local/bin/clang) not installed
make: *** [obj/backend.o] Error 1
I'm on OS X 10.10.5.
port select --list clang
Available versions for clang:
mp-clang-3.5
mp-clang-3.7
none (active)
David-Laxers-MacBook-Pro:giraffe davidlaxer$ ls -l /opt/local/bin/as
-r-xr-xr-x 1 root admin 28012 Feb 15 2015 /opt/local/bin/as
David-Laxers-MacBook-Pro:giraffe davidlaxer$ /opt/local/bin/as -v
Apple Inc version cctools-862, GNU assembler version 1.38
David-Laxers-MacBook-Pro:giraffe davidlaxer$ file /opt/local/bin/as
/opt/local/bin/as: Mach-O 64-bit executable x86_64
David-Laxers-MacBook-Pro:giraffe davidlaxer$ g++ --version
g++ (MacPorts gcc49 4.9.3_0) 4.9.3
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
From Makefile:
#CXX=g++-4.9 # I changed this - dbl
CXX=g++
# this is used to build gtb only
CC=gcc-4.9
HGVERSION:= $(shell hg parents --template '{node|short}')
CXXFLAGS_BASE = \
-Wall -Wextra -Wno-unused-function -std=gnu++11 -mtune=native -Wa,-q -ffast-math \
-pthread -fopenmp -DHGVERSION="\"${HGVERSION}\""
Here is what I did to solve this issue using Homebrew:
$ clang
zsh: no such file or directory: clang
$ brew install llvm --with-clang --with-asan
==> Downloading http://llvm.org/releases/3.5.0/llvm-3.5.0.src.tar.xz
######################################################################## 100.0%
🍺 /usr/local/Cellar/llvm/3.5.0: 1235 files, 171M, built in 16.8 minutes
$ clang -v
Apple LLVM version 6.0 (clang-600.0.57) (based on LLVM 3.5svn)
Target: x86_64-apple-darwin13.4.0
Thread model: posix
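An alternative sketch using MacPorts rather than Homebrew, untested: the port select --list clang output above shows that no clang is active, and the error message says the assembler wrapper is looking for /opt/local/bin/clang, so activating one of the already-installed clang ports may be enough.
# Assumption: mp-clang-3.7 is installed, as the "port select" listing suggests.
sudo port select --set clang mp-clang-3.7
# Confirm the path the assembler wrapper wants now exists, then retry the build.
ls -l /opt/local/bin/clang
make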

Make execvp permission denied

I've got a makefile I'm trying to run, without too much luck. Here's what happens:
I run make, and it starts out OK. It then gives an error that it can't find the file. However, I can run ls -ld on the file without any problem. Do you have any idea what's going on?
pgr@pgr:~/start_code_1$ make
gcc -Wall -g -m32 -c -fomit-frame-pointer -O2 -fno-builtin bootblock.s
ld -nostartfiles -nostdlib -melf_i386 -Ttext 0x0 -o bootblock bootblock.o
gcc -c -o createimage.o createimage.c
gcc -o createimage createimage.o
gcc -Wall -g -m32 -c -fomit-frame-pointer -O2 -fno-builtin kernel.s
ld -nostartfiles -nostdlib -melf_i386 -Ttext 0x1000 -o kernel kernel.o
./createimage.given --extended ./bootblock ./kernel
make: ./createimage.given: Command not found
make: *** [image] Error 127
pgr@pgr:~/start_code_1$ ls -ld ./createimage.given
-rwxr-xr-x 1 pgr pgr 26110 Sep 16 13:03 ./createimage.given
UPDATED
pgr@pgr:~/workspace/318/bootloader$ file createimage.given
createimage.given: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped
UPDATE 2
pgr@pgr:~/phdvdev/workspace/318/bootloader$ ldd createimage.given
not a dynamic executable
Most likely your createimage.given script has the wrong interpreter in its shebang line. And chances are it has been edited on a Windows machine and has a trailing carriage return :)
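A quick way to test that hypothesis, sketched below: dump the first bytes of the file. A script edited on Windows will show a shebang line ending in \r, while a native ELF binary starts with the bytes 177 E L F (which is what the "file" output above suggests).
# od -c prints a carriage return as \r, so a CRLF-damaged shebang is easy to spot.
head -c 64 ./createimage.given | od -c | head
# If it really is a script with CRLF line endings, stripping the \r characters fixes it:
# sed -i 's/\r$//' ./createimage.given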

Run valgrind in 32 bit mode on Mac 10.6?

If I compile a 32-bit executable on Mac 10.6 with the -m32 flag, like:
gcc -m32 test.c -o test
running valgrind on "test" fails with the error:
valgrind: ./test: cannot execute binary file
Are there any flags to pass to valgrind to execute this? Is the only option to compile valgrind in 32 bit mode?
Thanks
Blender, the -m32 flag just means to compile the file in 32-bit mode. Mac 10.6 runs 32-bit executables just fine.
I had this problem too, with valgrind built/installed by MacPorts. When I built it myself, however, the problem went away. I can confirm that a default build of valgrind with no extra configure options supports both 32-bit and 64-bit programs on Snow Leopard (I used version 3.6.1).
What version of Valgrind are you having trouble with?
On Linux and MacOS, a single build of Valgrind can automatically detect and do the right thing for both 32 and 64-bit binaries.
Here is what I see on Mac OS X 10.6.7 (10J869):
$ echo "int main() { free(1); return 0; }" | gcc -xc - -g -o a.out
$ echo "int main() { free(1); return 0; }" | gcc -xc - -g -o a.out32 -m32
$ valgrind --version
valgrind-3.7.0.SVN
$ valgrind ./a.out
==46102== Memcheck, a memory error detector
==46102== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al.
==46102== Using Valgrind-3.7.0.SVN and LibVEX; rerun with -h for copyright info
==46102== Command: ./a.out
==46102==
--46102-- ./a.out:
--46102-- dSYM directory is missing; consider using --dsymutil=yes
==46102== Invalid free() / delete / delete[] / realloc()
==46102== at 0x100010E9F: free (vg_replace_malloc.c:366)
==46102== by 0x100000F26: main (in ./a.out)
==46102== Address 0x1 is not stack'd, malloc'd or (recently) free'd
==46102==
==46102==
==46102== HEAP SUMMARY:
==46102== in use at exit: 88 bytes in 1 blocks
==46102== total heap usage: 1 allocs, 1 frees, 88 bytes allocated
==46102==
==46102== LEAK SUMMARY:
==46102== definitely lost: 0 bytes in 0 blocks
==46102== indirectly lost: 0 bytes in 0 blocks
==46102== possibly lost: 0 bytes in 0 blocks
==46102== still reachable: 0 bytes in 0 blocks
==46102== suppressed: 88 bytes in 1 blocks
==46102==
==46102== For counts of detected and suppressed errors, rerun with: -v
==46102== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
$ valgrind ./a.out32
==46103== Memcheck, a memory error detector
==46103== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al.
==46103== Using Valgrind-3.7.0.SVN and LibVEX; rerun with -h for copyright info
==46103== Command: ./a.out32
==46103==
--46103-- ./a.out32:
--46103-- dSYM directory is missing; consider using --dsymutil=yes
==46103== Invalid free() / delete / delete[] / realloc()
==46103== at 0xF7D8: free (vg_replace_malloc.c:366)
==46103== by 0x1F7B: main (in ./a.out32)
==46103== Address 0x1 is not stack'd, malloc'd or (recently) free'd
==46103==
==46103==
==46103== HEAP SUMMARY:
==46103== in use at exit: 320 bytes in 7 blocks
==46103== total heap usage: 7 allocs, 1 frees, 320 bytes allocated
==46103==
==46103== LEAK SUMMARY:
==46103== definitely lost: 0 bytes in 0 blocks
==46103== indirectly lost: 0 bytes in 0 blocks
==46103== possibly lost: 0 bytes in 0 blocks
==46103== still reachable: 260 bytes in 6 blocks
==46103== suppressed: 60 bytes in 1 blocks
==46103== Rerun with --leak-check=full to see details of leaked memory
==46103==
==46103== For counts of detected and suppressed errors, rerun with: -v
==46103== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)