I would like to try a GPU emulator. I have tried Multi2Sim, GPGPU-sim, and Ocelot, and with each of the three I run into a problem for which it seems hard to find a solution online. I will describe the problem I have with each emulator and maybe you can help. First, for some detailed context: I am using Ubuntu 12.04 LTS.
Multi2Sim says that it is not compatible with 64-bit, so you should compile for 32-bit. If I compile CUDA code for 32-bit, then when I run the resulting executable I get the error message "CUDA driver version is insufficient for CUDA runtime version." If I compile OpenCL code for 32-bit, then when I run the resulting executable, clGetPlatformIDs does not give me the NVIDIA OpenCL platform that I do get when I compile for 64-bit.
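For reference, a quick way to see which OpenCL platforms and devices the ICD loader exposes to a process is to enumerate them. This is only a minimal sketch using PyOpenCL (an assumption: the pyopencl package is installed); it mirrors what clGetPlatformIDs/clGetPlatformInfo return to the C code, but since a stock Python is 64-bit, the 32-bit case still has to be checked from the 32-bit binary itself.

# Minimal sketch: list the OpenCL platforms and devices the ICD loader exposes.
# Assumes the pyopencl package is installed.
import pyopencl as cl

for platform in cl.get_platforms():
    print(platform.name, platform.vendor, platform.version)
    for device in platform.get_devices():
        print("   ", device.name)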
The documentation for GPGPU-sim says:
We have tested OpenCL on GPGPU-Sim using NVIDIA driver version 256.40
http://developer.download.nvidia.com/compute/cuda/3_1/drivers/devdriver_3.1_linux_64_256.40.run
Note the most recent version of the NVIDIA driver produces PTX that is incompatible with this version of GPGPU-Sim.
I have NVIDIA Driver Version 295.49. When I look in "Additional Drivers" from "System Settings" I see two things listed: "NVIDIA accelerated graphics driver (version current) [Recommended]" and "NVIDIA accelerated graphics driver (post-release updates) (version current-updates)". The first one was activated, so I clicked Remove and then the second one automatically became activated. So I decided to just try installing version 256.40 and I got this error message which simply intimidates me:
ERROR: If you are using a Linux 2.4 kernel, please make sure
you either have configured kernel sources matching your
kernel or the correct set of kernel headers installed
on your system.
If you are using a Linux 2.6 kernel, please make sure
you have configured kernel sources matching your kernel
installed on your system. If you specified a separate
output directory using either the "KBUILD_OUTPUT" or
the "O" KBUILD parameter, make sure to specify this
directory with the SYSOUT environment variable or with
the equivalent nvidia-installer command line option.
Depending on where and how the kernel sources (or the
kernel headers) were installed, you may need to specify
their location with the SYSSRC environment variable or
the equivalent nvidia-installer command line option.
When I try to build Ocelot, I get the following, even though I followed the instructions "To pull from the LLVM SVN and build":
ocelot/ocelot/ir/implementation/ExternalFunctionSet.cpp:27:36: fatal error: llvm/Target/TargetData.h: No such file or directory
I am trying to execute the following basic psychopy (version 2021.2.3) code through the Python 3.6.13 console:
import psychopy.visual as pv
pv.Window()
This gives the following error:
*** stack smashing detected ***: <unknown> terminated
Aborted (core dumped)
The only related topic I can find is on the PsychoPy forum, and it has no answers.
I'm running this on an Ubuntu 18.04.5 machine. The machine is set up as a headless server, but I am trying to run this through RDP.
I installed PsychoPy using pip inside a conda environment. Initially I was getting errors related to wxPython. When I manually installed wxPython from a whl file, that error was resolved and the error in this question appeared.
My guess is that this has to do with the version of one of the libraries, but it is very hard to tell from this limited info.
What version of PsychoPy are you trying to install, and by what means are you installing it? Are you running from the app, or is this a script you're trying to launch from the terminal? I.e., I am trying to work out what parts of PsychoPy this is affecting (the app, the visual lib, the gui libs...?).
I had a very similar problem on Ubuntu 18.04.5 with PsychoPy 3.2.4 and no headless server. I suspect that the problem is general but Ubuntu/driver related.
My solution was to update the system and to install and switch to a proprietary NVIDIA driver.
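If it helps to confirm which OpenGL driver is actually active after switching, here is a rough check via pyglet (the windowing backend PsychoPy uses); this is only a sketch using pyglet's stock gl_info calls, not a PsychoPy diagnostic:

# Rough sketch: open a hidden pyglet window (this creates a GL context) and print
# the OpenGL vendor/renderer, to see whether the proprietary NVIDIA driver
# (rather than Mesa/llvmpipe) is the one in use.
import pyglet

window = pyglet.window.Window(visible=False)
print(pyglet.gl.gl_info.get_vendor())    # e.g. "NVIDIA Corporation"
print(pyglet.gl.gl_info.get_renderer())
print(pyglet.gl.gl_info.get_version())
window.close()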
I'm working on a computer on which NVIDIA drivers and CUDA were installed by someone else, so I don't know the method they used to install them.
In /usr/local/ there were two directories, cuda and cuda.10.0. Running nvidia-smi would output:
CUDA Version: 11.0
which made me believe two CUDA versions were installed on the system and were causing some errors.
Following this question, I removed CUDA by first running:
sudo apt-get --purge remove "*cublas*" "cuda*" "nsight*"
and then doing
sudo rm -rf /usr/local/cuda*
(I did not uninstall the NVIDIA drivers; Driver Version 450.80.02 is installed.)
Running nvidia-smi still outputs:
CUDA Version: 11.0
How do I uninstall CUDA 11? I prefer to have CUDA 10, and I can't find where CUDA 11 is installed.
Do I need to uninstall the NVIDIA drivers as well?
The nvidia-smi command does not show which version of CUDA is installed; it shows which CUDA version the installed NVIDIA driver supports. So there is no problem here, just an incorrect interpretation of the output of this command.
Even if you remove all CUDA installations, nvidia-smi will still show the maximum CUDA version that you can use with this driver.
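To see both numbers programmatically, you can query the driver and the runtime separately. A minimal sketch via ctypes follows; the library file names (libcuda.so.1, libcudart.so) are assumptions for a standard Linux install:

# Sketch: report the CUDA version the driver supports vs. the runtime that is installed.
# cuDriverGetVersion lives in the driver library (libcuda), cudaRuntimeGetVersion in the
# toolkit's runtime library (libcudart); 11000 means 11.0, 10000 means 10.0.
import ctypes

driver_version = ctypes.c_int(0)
runtime_version = ctypes.c_int(0)

libcuda = ctypes.CDLL("libcuda.so.1")            # installed by the NVIDIA driver
libcuda.cuDriverGetVersion(ctypes.byref(driver_version))

try:
    libcudart = ctypes.CDLL("libcudart.so")      # installed by the CUDA toolkit; may be absent
    libcudart.cudaRuntimeGetVersion(ctypes.byref(runtime_version))
except OSError:
    pass                                         # no toolkit found; runtime_version stays 0

print("driver supports up to CUDA:", driver_version.value)
print("installed CUDA runtime:    ", runtime_version.value)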
I am working on a Python script (I use Python 3.7.3) that uses tensorflow-gpu (1.14.0), and I used PyInstaller 3.5 to convert this script to an executable. I am using CUDA 10.0 and cuDNN 7.6.1, and my graphics card is an NVIDIA GeForce GTX 960M. I recently uninstalled CUDA to test whether the executable of the Python script still runs, and surprisingly it still runs on the GPU, which no longer works when I run the Python script directly.
My question is, can this executable be run on systems without the CUDA toolkit but with a CUDA-capable graphics card?
According to this documentation, PyInstaller makes and stores a private copy of all of the dependent external libraries the Python code relies on when building a single-file executable.
Therefore it is safe to assume that your executable runs irrespective of the installation status of the CUDA toolkit, because it carries a full private copy of the necessary CUDA libraries internally and uses them when the executable is run.
According to the GitHub issues in the official repository (here and here, for example), CUDA libraries are usually loaded dynamically at run time rather than at link time, so they are typically not included in the final exe file (or folder). As a result, the exe won't work on a machine without CUDA installed. The solution (please refer to the linked issues too) is to put the DLLs necessary to run the exe in its dist folder (if it was generated without the --onefile option) or to install the CUDA runtime on the target machine.
The behaviour you are seeing may be due to the specific version of TF, which loads the libraries differently from what is described above, but it is not the expected behaviour nowadays.
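As a concrete illustration of the dist-folder route mentioned above, here is a rough sketch; the paths, the output folder name, and the DLL names below are assumptions for CUDA 10.0 / cuDNN 7.6 on Windows and have to be adapted to your installation:

# Sketch: copy the CUDA/cuDNN DLLs that the frozen exe loads at run time into the
# PyInstaller dist folder, so the exe runs on machines without the CUDA toolkit.
# The paths, the folder name "my_script", and the DLL names are assumptions.
import shutil
from pathlib import Path

cuda_bin = Path(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin")
dist_dir = Path(r"dist\my_script")   # PyInstaller output folder (built without --onefile)

for dll in ["cudart64_100.dll", "cublas64_100.dll", "cufft64_100.dll", "cudnn64_7.dll"]:
    shutil.copy2(cuda_bin / dll, dist_dir / dll)
    print("copied", dll)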
I want to build a sample included in Vulkan SDK.
I downloaded the SDK from http://vulkan.lunarg.com and installed it.
Then I opened Visual Studio (I have the 2013 version) and opened the solution at C:\VulkanSDK\1.0.13.0\Demos, choosing the DEMOS.sln file. When I click on Local Windows Debugger, this message pops up:
vkCreateInstance Failure:
vkEnumerateInstanceExtensionProperties failed to find the VK_KHR_surface extension.
Do you have a compatible Vulkan installable client driver (ICD) installed? Please look at the Getting Started guide for additional information.
I have never worked with Vulkan, but since this is named "Demos", I assume everything inside it should already be set up to work.
I searched the web, but as Vulkan is new, there are few resources that talk about it.
What is an ICD, and how do I install it? (Is it different from the Vulkan SDK installer?) Or is this error about something else entirely, like Visual Studio project settings?
ICD is basically your GPU driver...
Both the SDK and the driver install the vulkaninfo app. Use it to determine which extensions you have and whatnot.
BTW, some time ago AMD drivers forgot to export extensions like VK_KHR_surface. Make sure you are using the latest driver (16.5.2.1 on AMD and 365.19 on NVIDIA as of the time of writing).
Also, you need a supported GPU. Consult:
NVIDIA supported GPUs
AMD supported GPUs
Khronos maintained list
BTW: All the demos work for me.
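If you want to script the vulkaninfo check suggested above, here is a rough sketch (assuming vulkaninfo is on your PATH, which the SDK installer normally arranges):

# Sketch: run vulkaninfo (shipped with the Vulkan SDK/driver) and look for the
# VK_KHR_surface instance extension that the demo complains about.
import subprocess

try:
    result = subprocess.run(["vulkaninfo"], stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, universal_newlines=True)
except FileNotFoundError:
    raise SystemExit("vulkaninfo not found - install the Vulkan SDK or a Vulkan-capable driver")

if "VK_KHR_surface" in result.stdout:
    print("VK_KHR_surface is reported - a Vulkan ICD appears to be installed")
else:
    print("VK_KHR_surface not reported - install or update a Vulkan-capable GPU driver")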
Do you have a compatible Vulkan installable client driver (ICD) installed?
This message tells you that the Vulkan loader wasn't able to find a Vulkan driver on your device. The ICD is the installable client driver that comes with your graphics card's driver.
What GPU are you using, and do you have a driver installed that actually supports Vulkan? Note that while your card may support OpenGL, it may not support Vulkan.
If you are using Ubuntu, check that "NVIDIA (Performance Mode)" is selected in the "NVIDIA X Server Settings" application.
I work on Windows 7 and have to use the ns-3 network simulator, so I installed Cygwin. I downloaded ns-3 from the official site and installed it successfully, but six modules were not built. I think this is because, when I installed Cygwin, I did not install some packages that these ns-3 modules depend on. This was the message:
Modules not built (see ns-3 tutorial for explanation):
brite click fd-net-device
openflow tap-bridge visualizer
I found the ns-3 tutorial (the PDF file on the official site), but I can't find in that document what I should do to build these modules successfully; there are just installation steps and some examples. I tried to run the tests, and they returned:
0 of 0 tests passed (0 passed, 0 skipped, 0 failed, 0 crashed, 0 valgrind errors)
So I think I will not be able to work with ns-3 for now, because not a single test passes. Where can I read about what I need to update or install in Cygwin (maybe another version of gcc, or Java, etc.) to build more of the modules successfully?
Those modules require additional software besides ns-3, since they are developed by integrating other projects.
You can find how to install and use them on the respective ns-3 module documentation pages or the ns-3 wiki:
Brite: https://www.nsnam.org/docs/models/html/brite.html
OpenFlow: https://www.nsnam.org/docs/models/html/openflow-switch.html
Click: https://www.nsnam.org/docs/models/html/click.html
Visualizer: https://www.nsnam.org/wiki/PyViz
Tap-Bridge: must be built on a Linux-based OS
fd-net-device: must be built on a Linux-based OS
TapBridge and the emulation features depend on Linux, and those components are not enabled on Windows via Cygwin. If you want to do serious work with ns-3 and still need Windows, consider running a popular Linux platform in a virtual machine: https://www.nsnam.org/wiki/HOWTO_use_VirtualBox_to_run_simulations_on_Windows_machines and https://www.nsnam.org/wiki/HOWTO_use_VMware_to_set_up_virtual_networks_%28Windows%29
Unless you actually need these modules, you can simply skip them; they are not required.