How to find the JetPack version of NVIDIA Jetson Device? - nvidia-jetson

Is there a way to find the version of the currently installed JetPack on my NVIDIA Jetson Xavier AGX kit?

To get the JetPack version, architecture, and dependencies, query the nvidia-jetpack package:
sudo apt-cache show nvidia-jetpack
#Package: nvidia-jetpack
#Version: 4.4.1-b50
#Architecture: arm64
#Maintainer: NVIDIA Corporation
#Installed-Size: 194
#Depends: nvidia-cuda (= 4.4.1-b50), nvidia-opencv (= 4.4.1-b50), nvidia-cudnn8 (= 4.4.1-b50)
For the version specifically,
sudo apt-cache show nvidia-jetpack | grep "Version"
#Version: 4.4.1-b50

Alternatively, the jetsonhacks jetsonUtilities script reports the L4T release, JetPack version, and installed library versions:
git clone https://github.com/jetsonhacks/jetsonUtilities.git
cd jetsonUtilities
python jetsonInfo.py
Example output (here from a Jetson Nano developer kit):
NVIDIA Jetson Nano (Developer Kit Version)
L4T 32.5.1 [ JetPack 4.5.1 ]
Ubuntu 18.04.5 LTS
Kernel Version: 4.9.201-tegra
CUDA 10.2.89
CUDA Architecture: 5.3
OpenCV version: 3.4.17-dev
OpenCV Cuda: YES
CUDNN: 8.0.0.180
TensorRT: 7.1.3.0
VisionWorks: 1.6.0.501
VPI: ii libnvvpi1 1.0.15 arm64 NVIDIA Vision Programming Interface library
Vulkan: 1.2.70
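The L4T-to-JetPack correspondence that jetsonInfo.py prints (L4T 32.5.1 is JetPack 4.5.1) can be reproduced with a small lookup. The table below is a hypothetical excerpt covering only versions that appear on this page:

```python
# Map an L4T release string to its JetPack version.
# Hypothetical excerpt: only the releases mentioned above are listed.
L4T_TO_JETPACK = {
    "32.5.1": "4.5.1",   # from the jetsonInfo.py output above
    "32.4.4": "4.4.1",   # L4T release shipped with JetPack 4.4.1
}

def jetpack_version(l4t_release: str) -> str:
    """Return the JetPack version matching an L4T release, or 'unknown'."""
    return L4T_TO_JETPACK.get(l4t_release, "unknown")

print(jetpack_version("32.5.1"))  # 4.5.1
```

On a real device the L4T release string can be read from /etc/nv_tegra_release and fed to a lookup like this.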

Related

How to install cuda on Jetson AGX Xavier?

We have an NVIDIA Jetson AGX Xavier, and our CUDA installation broke after accidentally running "sudo apt update".
We are not sure how to reinstall CUDA on the Jetson without reflashing it.

Tensorflow + Pytorch install Cudatoolkit 11.2

I have a Windows 10 machine with an NVIDIA 3080 card and CUDA Toolkit 11.2 installed.
I want to install PyTorch in addition to TensorFlow, which works 100% fine so far.
If I understand the description correctly, the CUDA toolkit installed inside the conda Python environment is "independent" of the CUDA toolkit version installed system-wide on Windows.
I tried to install PyTorch with this command, but it does not recognize the GPU.
pip install torch==1.9.0+cu102 torchvision==0.10.0+cu102 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
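The +cu102 wheels are built without kernels for Ampere GPUs (compute capability sm_86, which includes the 3080), which is why the GPU is not recognized. A minimal sketch of that version gate, with the minimum-CUDA requirements hard-coded from NVIDIA's published compute-capability support:

```python
# Minimum CUDA toolkit version able to target a given compute capability.
# Hard-coded from NVIDIA's support matrix: sm_86 (Ampere, RTX 30xx) needs CUDA 11.1+.
MIN_CUDA_FOR_SM = {
    "sm_75": (10, 0),  # Turing
    "sm_80": (11, 0),  # Ampere (A100)
    "sm_86": (11, 1),  # Ampere (RTX 30xx)
}

def wheel_supports_gpu(wheel_cuda: tuple, sm: str) -> bool:
    """True if a wheel built for CUDA `wheel_cuda` ships kernels for `sm`."""
    return wheel_cuda >= MIN_CUDA_FOR_SM[sm]

print(wheel_supports_gpu((10, 2), "sm_86"))  # False: a cu102 wheel cannot drive a 3080
print(wheel_supports_gpu((11, 1), "sm_86"))  # True: a +cu111 wheel would work
```

In practice this means picking a PyTorch wheel with a CUDA 11.x suffix for an RTX 30-series card.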

Mozilla TTS in PowerShell: "UserWarning: NVIDIA GeForce RTX 3060 Ti with CUDA capability sm_86 is not compatible with the current PyTorch installation"

I am trying to run ./TTS/bin/train_tacotron.py with GPU in Powershell.
I followed these instructions, which got me pretty far: the config is read and the model is restored, but just as training is about to start, I get this message:
UserWarning: NVIDIA GeForce RTX 3060 Ti with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
If you want to use the NVIDIA GeForce RTX 3060 Ti GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
The linked instructions don't really help. I tried installing the most recent stable version of PyTorch, as well as 1.7.1 (as opposed to 1.8.0, as recommended in the instructions I linked), but I got the same message.
How can I get this to run on my GPU?
Side note: I was successfully able to run training on my GPU in WSL, but it froze after a few hundred epochs, so I wanted to try Powershell to see if it made a difference.
To work properly with your GPU, you need to pin cudatoolkit to version 11.3. Execute the following commands:
conda uninstall cudatoolkit
conda install cudatoolkit=11.3 -c pytorch
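The warning comes from PyTorch comparing the GPU's compute capability against the list of capabilities compiled into the installed build. That check can be sketched like this, with the supported list copied from the warning message quoted above:

```python
# Reproduce the logic behind PyTorch's sm_86 compatibility warning.
# The supported list is copied from the UserWarning text above.
SUPPORTED = {"sm_37", "sm_50", "sm_60", "sm_61", "sm_70", "sm_75"}

def is_supported(gpu_sm: str, supported=SUPPORTED) -> bool:
    """True if the installed build ships kernels for this compute capability."""
    return gpu_sm in supported

if not is_supported("sm_86"):
    print("sm_86 not in the build's capability list; install a CUDA 11.x build")
```

Installing with cudatoolkit=11.3 pulls a PyTorch build whose capability list includes sm_86, which is why the conda commands above resolve the warning.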

Can I install cuda 10.2 for using tensorflow 2.1 or it has to be cuda 10.1?

Can I install cuda 10.2 for using tensorflow 2.1 or it has to be cuda 10.1?
I am using ubuntu 18.04 and I have a NVIDIA Quadro P5000.
Providing the solution here (Answer Section), even though it is present in the Comment Section, for the benefit of the community.
No; as per the TensorFlow documentation, TensorFlow >= 2.1.0 supports CUDA 10.1. Please refer to the compatible version details.
PyTorch needs CUDA 10.2 but TensorFlow needs CUDA 10.1. Is it a joke?
Actually, you can use CUDA 10.2 with TensorFlow 2.x.
It is quite simple.
WHY:
When you run "import tensorflow", TensorFlow searches LD_LIBRARY_PATH for a library named 'libcudart.so.$.$'. TensorFlow 2.1.0-2.3.0 was built against CUDA 10.1, so it looks for 'libcudart.so.10.1'. A CUDA 10.2 installation only ships 'libcudart.so.10.2', so the import raises an error.
In practice the CUDA 10.1 and 10.2 runtimes are close enough that we can work around this with symlinks.
HOW
cd /usr/local/cuda-10.2/targets/x86_64-linux/lib/
ln -s libcudart.so.10.2.89 libcudart.so.10.1
cd /usr/local/cuda-10.2/extras/CUPTI/lib64
ln -s libcupti.so.10.2.75 libcupti.so.10.1
cd /usr/local/cuda-10.2/lib64
ln -s libcudnn.so.8 libcudnn.so.7
vim /etc/profile
export CUDA_HOME=/usr/local/cuda-10.2
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${CUDA_HOME}/extras/CUPTI/lib64:${LD_LIBRARY_PATH}
export PATH=${CUDA_HOME}/bin:${PATH}
source /etc/profile
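The symlink trick above just gives the 10.2 runtime a 10.1-style filename; the loader follows the link and dlopen() gets the real library. The mechanics can be exercised safely in a scratch directory (the file names here are stand-ins, not real libraries):

```python
import os
import tempfile

def alias_resolves(real_name="libcudart.so.10.2.89",
                   alias_name="libcudart.so.10.1") -> bool:
    """Create alias_name as a symlink to real_name and check it resolves."""
    with tempfile.TemporaryDirectory() as d:
        real = os.path.join(d, real_name)    # stand-in for the actual library
        alias = os.path.join(d, alias_name)  # the name TensorFlow looks for
        open(real, "w").close()
        os.symlink(real, alias)
        # The alias resolves to the real file, which is all the loader needs.
        return os.path.realpath(alias) == os.path.realpath(real)

print(alias_resolves())  # True
```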
Done!

tensorflow-gpu - pycharm doesn't recognize GPU

I've imported tensorflow-gpu, but PyCharm doesn't recognize the GPU.
Details:
IDE: PyCharm
GPU: GRID P40-1Q
CUDA: 8
cuDNN: 7.1
Python: 3.5
I'm getting this message:
Have you checked with
nvidia-smi
to see if your graphics driver is working? Also, have you checked your tensorflow-gpu version? Check for compatible configurations here: https://www.tensorflow.org/install/source#tested_source_configurations
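The tested-configurations table linked above pairs each tensorflow-gpu release with one specific CUDA/cuDNN combination, so the check reduces to a lookup. A sketch with a hypothetical excerpt of that table (only rows relevant to this thread; see the linked page for the full matrix):

```python
# Hypothetical excerpt of TensorFlow's tested build configurations.
TESTED = {
    "2.1.0":  {"cuda": "10.1", "cudnn": "7.6"},
    "1.12.0": {"cuda": "9.0",  "cudnn": "7"},
}

def compatible_tf_versions(cuda: str) -> list:
    """tensorflow-gpu versions whose tested configuration uses this CUDA release."""
    return sorted(v for v, cfg in TESTED.items() if cfg["cuda"] == cuda)

print(compatible_tf_versions("9.0"))  # ['1.12.0']
```

With CUDA 8 installed, none of these rows match, which is consistent with PyCharm failing to see the GPU until CUDA and tensorflow-gpu are aligned.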
I tried the following configuration on PyCharm with Windows 10 and it worked:
Anaconda Navigator with Python 3.6.8
CUDA 9.0
cuDNN 7.1
NVIDIA GeForce 1050 Ti GPU
Follow the instructions at https://stackoverflow.com/a/51307381/2562870