How to get CMake to enable CUDA when compiling YOLO (darknet)?

I am currently using the cmake-gui to compile YOLO darknet at https://github.com/AlexeyAB/darknet.git. However, it will not enable CUDA, and I am having a few other odd issues. For example, when I run darknet.exe from the Release folder after building it with VS2017, it states that it cannot find pthreadVC2.dll or opencv_world410.dll.
To work around the DLL issues, I copied the exe and those DLLs into the root folder of the project. This seems to work, but I am not sure why it doesn't work otherwise.
For CUDA, I am not sure what to try. I have these system variables and PATH entries:
Here is my cmake-gui:
It can be seen that CMAKE_CUDA_COMPILER is NOTFOUND, which I think is the problem, but I am not sure why it cannot be found. If I run nvcc -V in the command prompt, it returns:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:04_Central_Daylight_Time_2018
Cuda compilation tools, release 10.0, V10.0.130
Also here is the output for cmake configuration:
Selecting Windows SDK version 10.0.17763.0 to target Windows 10.0.17134.
OpenCV ARCH: x64
OpenCV RUNTIME: vc15
OpenCV STATIC: OFF
Found OpenCV 4.1.0 in C:/opencv/build/x64/vc15/lib
You might need to add C:\opencv\build\x64\vc15\bin to your PATH to be able to run your applications.
ZED SDK not enabled, since it requires CUDA
Configuring done
If you have any tips for any of these problems, please let me know. Just an FYI: darknet currently does work, and if I test it on dog.jpg it successfully detects the classes. However, this is of course without CUDA or cuDNN, and I would like to use those eventually. Thank you! If you need anything else from me, please let me know!
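Configuring from the command line instead of the cmake-gui makes it easier to see and override the CUDA settings. The following is only a sketch: ENABLE_CUDA is the option name used by the AlexeyAB CMakeLists at the time of writing, and the nvcc path assumes a default CUDA 10.0 installation, so adjust both to your setup. Run it from a VS2017 x64 Developer Command Prompt:
cd darknet
mkdir build_release
cd build_release
cmake .. -G "Visual Studio 15 2017 Win64" -DENABLE_CUDA=ON -DCMAKE_CUDA_COMPILER="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.0/bin/nvcc.exe"
cmake --build . --config Release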

Unlike what was said above, I didn't reinstall CUDA; I just copied the four files from
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\extras\visual_studio_integration\MSBuildExtensions
to
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations
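If it helps, a hedged sketch of that copy from an elevated Command Prompt (adjust the CUDA version, Visual Studio edition, and toolset folder to match your installation):
copy "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\extras\visual_studio_integration\MSBuildExtensions\*" "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\"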

The answer was given by @Andropogon: CUDA has to be reinstalled after Visual Studio.
This is what we found when I dug into it a bit with my colleague:
As for the OP, all compilation steps seemed to run without error and generated an executable.
Taking a closer look at CMake, under CMAKE/CMAKE_CUDA_COMPILER it said NOT FOUND, despite nvcc.exe being on the Path (nvcc --version runs fine in PowerShell). We manually entered the location of nvcc.exe for this option, and configure then came up with a more helpful error message, "No CUDA toolset found.", with references to line numbers in various CMake files. Among those lines was this check, which seems to confirm that Visual Studio (VS) is part of the problem:
if(NOT CMAKE_VS_PLATFORM_TOOLSET_CUDA)
  message(FATAL_ERROR "No CUDA toolset found.")
endif()
So after reinstalling CUDA, the compilation looked more like I would expect, but I still get an executable that doesn't appear to do anything (no output on the command line, no prediction.jpg generated). Anyway, hopefully that sheds a bit of light on the CUDA/VS/CMake issue.
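One way to check whether the Visual Studio CUDA integration is visible to CMake at all is to request the CUDA toolset explicitly at configure time. This is only a sketch; the generator name and CUDA version are examples, so substitute your own. If the MSBuild integration files are missing, this should fail right at configure time rather than at build time:
cmake .. -G "Visual Studio 16 2019" -T cuda=10.1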

I had the same problem. I tried many ways to make the GPU available for transe, and finally CMake started to see CUDA when I reinstalled VS2019 (moving it from disk D to disk C) and reinstalled CUDA v10.1. After this, CMake began to find CUDA, and after compiling the project in VS2019, everything started to work correctly.
The important thing is to install Visual Studio first and CUDA afterwards.

Related

Making the CMAKE path available to another software

I am trying to install a software package called kinsol, a non-linear equation solver, through ccmake as instructed in its documentation. The package requires CMake version 3.12 or higher, so I installed 3.17.3. Now, the problem is that my kinsol installation process is not able to locate ccmake and gives this message:
ccmake ../kinsol-6.2.0
The program 'ccmake' is currently not installed. You can install it by typing:
sudo apt install cmake-curses-gui
But using the aforementioned command installs version 3.5, which again fails due to kinsol's version requirement. I came across a similar question in this forum and followed the workarounds suggested there, such as putting the installation path of cmake in the .bashrc file, still without any success. Does anyone know how to make the path of cmake known to the software?
Thanks,
DP
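For reference, a sketch of what putting the CMake installation on the path usually looks like, assuming the newer CMake was installed under /usr/local/cmake-3.17.3 (adjust the prefix to wherever it actually went), added to ~/.bashrc:
export PATH=/usr/local/cmake-3.17.3/bin:$PATH
Then reload the shell configuration and confirm the right binary is picked up:
source ~/.bashrc
which ccmake
ccmake --version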

Does PyInstaller include CUDA

I am working on a Python script (I use Python 3.7.3) that uses tensorflow-gpu (1.14.0), and I used PyInstaller 3.5 to convert this script to an executable. I am using CUDA 10.0 and cuDNN 7.6.1, and my graphics card is an NVIDIA GeForce GTX 960M. I recently uninstalled CUDA to test whether the executable of the Python script still runs, and surprisingly it still runs on the GPU, which no longer works when I run the Python script directly.
My question is, can this executable be run on systems without the CUDA toolkit but with a CUDA-capable graphics card?
According to this documentation, when building a single-file executable PyInstaller will make and store a private copy of all the dependent external libraries the Python code relies on.
Therefore it is safe to assume that your executable runs irrespective of the installation status of the CUDA toolkit, because it carries a full private copy of the necessary CUDA libraries internally, which it uses when the executable is run.
According to the GitHub issues in the official repository (here and here for example) CUDA libraries are usually dynamically loaded at run-time and not at link-time, so they are typically not included in the final exe file (or folder) with the result that the exe file won't work on a machine without CUDA installed. The solution (please refer to the linked issues too) is to put the DLLs necessary to run the exe in its dist folder (if generated without the --onefile option) or install the CUDA runtime on the target machine.
The behaviour you're experiencing may be due to the specific version of TF, which loads the libraries differently from what is described above, but it's not the expected behaviour nowadays.
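As a hedged illustration of the "ship the DLLs with the exe" approach from those issues, you can ask PyInstaller to bundle the CUDA runtime DLLs explicitly. The DLL names below are the CUDA 10.0 ones and your_script.py is a placeholder, so adjust both to your setup:
pyinstaller --add-binary "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin\cudart64_100.dll;." --add-binary "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin\cublas64_100.dll;." your_script.py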

Latest CMake and LLVM on Windows 10

All
The latest LLVM is 7.0, and it works quite well on Windows 10 x64, building native executables etc.
The latest CMake is 3.12.x.
I have VS 2017 Pro installed as well.
I downloaded them both and tried to build a simple project with them on Windows, and it didn't work, even with CC/CXX set and the linker pointing to lld; it fails on compiling the test program because it cannot find rc (the resource compiler).
I tried targeting GNU make as well as Ninja as the build system.
Is this a supported configuration? If yes, how do I make it work?
Basically, I would like to use CMake/LLVM from an editor/terminal, like I do on Linux.
Run CMake from Developer Command Prompt.
That should make rc available in your PATH, and then CMake should be able to find it.
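A sketch of that workflow, assuming VS 2017 Professional in its default location and clang-cl from LLVM on the PATH (paths and edition names will differ on your machine):
call "C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Auxiliary\Build\vcvars64.bat"
set CC=clang-cl
set CXX=clang-cl
mkdir build
cd build
cmake .. -G Ninja
cmake --build .
Loading vcvars64.bat is what puts rc.exe (and the rest of the MSVC toolchain) on the PATH, which is exactly what the Developer Command Prompt does for you.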

ImportError: Could not find 'cudart64_90.dll'. TensorFlow [duplicate]

I installed CUDA 9.0 because without it, Tensorflow gives the error:
ImportError: Could not find 'cudart64_90.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable.
I have the PATH variable set to the bin directory of the CUDA 9.0 installation, where the required DLL file is present. I tried setting it to its parent directory too, but it still gives me the same error.
I found the solution. And it was the good old advice - "Have you tried turning it off and on again?"
I restarted the computer, Tensorflow found cudart64_90.dll, but now it could not find cudnn64_7.dll. I'm providing the steps ahead to get rid of the issues I encountered.
If you've installed the Tensorflow GPU version, you're likely to run into the problem mentioned in the post, especially if you've not installed NVIDIA development toolkits before. Follow these steps:
1. Install CUDA
Get it from here. Install only the version mentioned in Tensorflow's ImportError.
ImportError: Could not find 'cudart64_90.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable. Download and install CUDA 9.0 from this URL: https://developer.nvidia.com/cuda-toolkit
It explicitly tells you the version number. Initially, I installed CUDA 9.1 instead of 9.0 and it didn't work. The installation on Windows is straightforward. Run the .exe, and uncheck NVIDIA GeForce and other packages if you already have them installed.
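If you want to double-check which CUDA toolkit version is currently on your PATH (it should match the one TensorFlow asked for), a quick check from the command prompt is:
nvcc --version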
2. Include CUDA path in PATH variable
Point it to the bin directory of your CUDA installation.
Check here if you don't know how to set the PATH variable. Now try importing Tensorflow, if it still doesn't work, try rebooting the system.
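A hedged example of that PATH entry for CUDA 9.0 installed in its default location (adjust the version and drive to your setup; use setx or the System Properties dialog to make it permanent):
set PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin;%PATH%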
Now you will likely run into the error:
ImportError: Could not find 'cudnn64_7.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable. Note that installing cuDNN is a separate step from installing CUDA, and this DLL is often found in a different directory from the CUDA DLLs. You may install the necessary DLL by downloading cuDNN 7 from this URL: https://developer.nvidia.com/cudnn
3. Install cuDNN
Once again, only install the version mentioned in the error. To get the installer, you need an NVIDIA developer account. If you don't have one, sign up and it will direct you to the link to download cuDNN. Select the version compatible with your CUDA version (it's in the package name). Download the zip archive and extract it somewhere on your disk.
4. Include cuDNN path in PATH variable
Similar to step two. This time, point it to the bin directory in your extracted archive of cuDNN. Now import Tensorflow. Restart system if required.
It should now work.
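To confirm that TensorFlow can now see the GPU, a quick check (using the TF 1.x API, which is what this question is about) is:
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
It should print True once both CUDA and cuDNN are found.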
When I reached step 3, I copied and pasted the files from inside the archive, as shown here:
Copy <installpath>\cuda\bin\cudnn64_7.dll to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin.
Copy <installpath>\cuda\include\cudnn.h to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\include.
Copy <installpath>\cuda\lib\x64\cudnn.lib to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\lib\x64.
I had the same problem for a couple of hours as well. Restarting my computer fixed the issue you were having, so give that a try.
Always check the CUDA version. In this case you have to install CUDA 9.0; this will create the cudart64_90.dll file in the C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin folder.
This will surely work.

SCIP Python Installation Issue Windows with pip

Hello community / developers,
I am currently trying to install SCIP with Python and found that there is Windows support and a pip installer, based on https://github.com/SCIP-Interfaces/PySCIPOpt/blob/master/INSTALL.md.
Nevertheless, I run into a "Cannot open include file" problem.
Below is a list of the things I performed to get to this step.
Download Python Anaconda 2.7 64 bit
Install with all checkboxes as they are
Download PyCharm Community edition
Click 64 bit desktop link, and associate with .py checkboxes
Open CMD > write: easy_install -U pip
Download Visual C++ Compiler for Python 2.7
Setup folder structure and downloaded header files
CMD > pip install pyscipopt leads to error:
C:\Users\UserName\Downloads\SCIPOPTDIR\include\scip/def.h(32) : fatal error C1083: Cannot open include file: 'stdint.h': No such file or directory
error: command 'C:\Users\UserName\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\cl.exe' failed with exit status 2
My environment variables and folder directory can be found here:
http://imgur.com/a/mJRva
Help is very much appreciated,
Kind regards
The error message says you're missing "stdint.h". This is because you don't have a recent Visual Studio version; you are probably using the one that came with your Python installation. Try installing the latest Visual Studio to fix this issue.
You might want to look at this question:
Why Microsoft Visual Studio cannot find <stdint.h>?
PySCIPOpt needs a C/C++ linker to build the Python module - although it's already precompiled on PyPI.
Alright, I figured it out. Here is what I did:
(1) Install Python 3.6 instead of Python 2.7 (both Anaconda)
(2) Afterwards pip installation worked
(3) I moved the library files into the lib folder
(4) Now I can execute the examples.
Interestingly, I get an unresolved reference error although the code works fine (I assume this is a bug in PyCharm/scipy?). Link to picture: https://www.dropbox.com/s/d8pf6dkwuz9cwto/scip_python.png?dl=0
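For anyone retracing those steps, a hedged sketch of the Windows install, assuming the SCIP headers and libraries were unpacked to C:\Users\UserName\Downloads\SCIPOPTDIR as in the error message above (SCIPOPTDIR is the environment variable that PySCIPOpt's install instructions reference):
set SCIPOPTDIR=C:\Users\UserName\Downloads\SCIPOPTDIR
pip install pyscipopt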