System information
custom code: no, it is the one in https://www.tensorflow.org/get_started/estimator
system: Apple
OS: Mac OS X 10.13
TensorFlow version: 1.3.0
Python version: 3.6.3
GPU model: AMD FirePro D700 (actually, two such GPUs)
Describe the problem
Dear all,
I am running the simple iris program:
https://www.tensorflow.org/get_started/estimator
under python 3.6.3 and tensorflow 1.3.0.
The program executes correctly, apart from the very last part, i.e. the one related to the confusion matrix.
In fact, the result I get for the confusion matrix is:
New Samples, Class Predictions: [array([b'1'], dtype=object), array([b'2'], dtype=object)]
rather than the expected output:
New Samples, Class Predictions: [1 2]
Has anything about confusion matrix changed in the latest release?
If so, how should I modify that part of the code?
Thank you very much for your help!
Best regards
Ivan
Source code / logs
https://www.tensorflow.org/get_started/estimator
This looks like a numpy representation issue: array([b'1'], dtype=object) is how numpy displays an object array holding the byte string b'1'.
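As a minimal workaround sketch (assuming predictions is the list printed above, where each element is a single-element object array holding a byte string), each element can be decoded back to an int:

predicted_classes = [int(p[0]) for p in predictions]  # b'1' -> 1, b'2' -> 2
print("New Samples, Class Predictions:", predicted_classes)  # [1, 2]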
Related
I cloned yolov7 (https://github.com/WongKinYiu) with yolov7.pt and tried to run
detect.py (I just want to run the example). It seems to run normally, but the output image has no detections drawn on it. Why?
Here is my command and log:
(PyTorch) E:\yolov7>python detect.py --weights yolov7.pt --source inference\images\bus.jpg
Namespace(weights=['yolov7.pt'], source='inference\\images\\bus.jpg', img_size=640, conf_thres=0.25, iou_thres=0.45, device='', view_img=False, save_txt=False, save_conf=False, nosave=False, classes=None, agnostic_nms=False, augment=False, update=False, project='runs/detect', name='exp', exist_ok=False, no_trace=False)
YOLOR v0.1-103-g6ded32c torch 1.11.0 CUDA:0 (NVIDIA GeForce GTX 1650, 4095.6875MB)
Fusing layers...
RepConv.fuse_repvgg_block
RepConv.fuse_repvgg_block
RepConv.fuse_repvgg_block
Model Summary: 306 layers, 36905341 parameters, 6652669 gradients
Convert model to Traced-model...
traced_script_module saved!
model is traced!
E:\anaconda\envs\PyTorch\lib\site-packages\torch\functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Done. (151.6ms) Inference, (9.3ms) NMS
The image with the result is saved in: runs\detect\exp4\bus.jpg
Done. (3.713s)
and here is my result: [output image]
You set the argument classes=None.
The classes variable refers to a list of class indices: it selects which of the classes stored in the weights you are referencing are kept during inference.
From detect.py:
parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3')
Since you told the model to check for zero classes, it will not report anything.
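As an illustration (assuming the standard COCO-trained yolov7.pt, where index 0 is "person"), filtering to a single class would look like:

python detect.py --weights yolov7.pt --source inference\images\bus.jpg --classes 0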
I was also facing this issue. After downgrading the CUDA version to 10.2, my problem was solved. I used CUDA 10.2 with PyTorch 1.10.0 via a pip installation. I hope it helps you too.
pip3 install torch==1.10.0+cu102 torchvision==0.11.1+cu102 torchaudio===0.10.0+cu102 -f https://download.pytorch.org/whl/cu102/torch_stable.html
Source of the answer: link
When you are working with a GPU, detect.py enables half precision by default; you can change this by editing your detect.py file.
Open detect.py; I'm not exactly sure, but around line 31 you will see this line of code:
half = device.type != 'cpu' # half precision only supported on CUDA
Replace that line with
half = False
and then save the detect.py file.
Now, when you run your detection command, make sure to pass --device 0, which indicates that your GPU must be utilized for detection:
python detect.py --weights yolov7.pt --device 0 --source inference\images\bus.jpg
Hi, I have two GPUs and sometimes I want to run one script on GPU:0 and the other one on GPU:1. The question is how I can execute a Python script on a specific GPU, i.e. how to bind script execution to a particular GPU. Looking ahead, I'll say that I already know about with tf.device('/device:GPU:1'):.
I thought I could solve my issue using tf.config.experimental.set_visible_devices, but I was wrong. The plan was simple: at the top of the code I would set the required GPU, and then, as I thought, the script would run on the GPU I made visible. But I was wrong - see the code below. When I run the script, I get this error:
TensorFlow device (GPU:0) is being mapped to multiple CUDA devices (1 now, and 0 previously),
which is not supported. This may be the result of providing different GPU configurations
(ConfigProto.gpu_options, for example different visible_device_list) when creating multiple
Sessions in the same process. This is not currently supported, see https://github.com/tensorflow/tensorflow/issues/19083
import tensorflow as tf

def work():
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    for i in range(100000):
        c = tf.matmul(a, b)

def main():
    print(tf.__version__)
    tf.config.set_soft_device_placement(False)
    tf.debugging.set_log_device_placement(True)
    gpus = tf.config.experimental.list_physical_devices('GPU')
    if gpus:
        try:
            print(f'set visible GPU device as {gpus[1]}')
            tf.config.experimental.set_visible_devices(gpus[1], 'GPU')
        except RuntimeError as e:
            print(e)
    device_name = tf.test.gpu_device_name()
    print(device_name)
    with tf.device('/device:GPU:1'):
        work()

if __name__ == "__main__":
    main()
I believe I'm not the first person to run into this issue - could somebody share the solution?
As it turns out, the answer is simple - here is what I found out. When you have 2 GPUs and call tf.config.experimental.set_visible_devices(gpus[1], 'GPU'), only one GPU will be available to your script: Device:1 (in my example). BUT, and this is important, the name (number) of this device will be Device:0, not Device:1 as you might expect. The reason is that the system re-enumerates all GPUs visible to your script from scratch, so numbering starts from 0 again. Even though the real device is named Device:1, inside your script it must be addressed as Device:0, not Device:1.
I hope this can help someone like me.
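To illustrate the point, here is a minimal sketch of my own (not from the original post), assuming at least two physical GPUs: restrict visibility to the second physical GPU, then address it as GPU:0 inside the script.

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if len(gpus) > 1:
    # Only the second physical GPU stays visible to this process.
    tf.config.experimental.set_visible_devices(gpus[1], 'GPU')

# After re-enumeration the remaining device is named GPU:0,
# even though it is physically the second card.
with tf.device('/device:GPU:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
    print(tf.matmul(a, b))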
I want to compare two images for similarity. Since my purpose is to match a given image against a massive collection of images, I want to run the comparisons on GPU.
I came across the tf.image.ssim and tf.image.psnr functions, but I am unable to find working examples. Solutions in PyTorch are also appreciated. Since I don't have a good understanding of CUDA and the C language, I am hesitant to try writing kernels in PyCUDA.
Will it help processing speed if I read the entire image collection and store it as TFRecords for future processing?
Any guidance or solution, greatly appreciated. Thank you.
Edit: I am matching images of the same size only. I don't want to do a mere histogram match; I want an SSIM or PSNR implementation for image similarity, so I am assuming matched images will be similar in color, content, etc.
Check out the example on the tensorflow doc page (link):
im1 = tf.image.decode_png(tf.io.read_file('path/to/im1.png'))
im2 = tf.image.decode_png(tf.io.read_file('path/to/im2.png'))
print(tf.image.ssim(im1, im2, max_val=255))
This should work on the latest version of TensorFlow. Note that tf.image.decode_png expects the file contents rather than a path, hence the tf.io.read_file call. On older versions tf.image.ssim will return a tensor (print will not give you a value), but you can evaluate it inside a session, as sketched below.
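A minimal sketch of that session-based evaluation, assuming TensorFlow 1.x (on versions predating the tf.io module, use tf.read_file instead):

import tensorflow as tf

im1 = tf.image.decode_png(tf.io.read_file('path/to/im1.png'))
im2 = tf.image.decode_png(tf.io.read_file('path/to/im2.png'))
ssim = tf.image.ssim(im1, im2, max_val=255)
with tf.Session() as sess:
    print(sess.run(ssim))  # evaluates the tensor to a concrete value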
There is no implementation of PSNR or SSIM in PyTorch itself. You can either implement them yourself or use a third-party package, like piqa, which I have developed.
Assuming you already have torch and torchvision installed, you can get it with
pip install piqa
Then, for the image comparison:
import torch
from torchvision import transforms
from PIL import Image
im1 = Image.open('path/to/im1.png')
im2 = Image.open('path/to/im2.png')
transform = transforms.ToTensor()
x = transform(im1).unsqueeze(0).cuda() # .cuda() for GPU
y = transform(im2).unsqueeze(0).cuda()
from piqa import PSNR, SSIM
psnr = PSNR()
ssim = SSIM().cuda()
print('PSNR:', psnr(x, y))
print('SSIM:', ssim(x, y))
I am using TensorFlow version 1.3, but the tutorial I am following was written for version 1.0, and I am quite new to TensorFlow. The problem I get is:
'module' object has no attribute 'prepare_attention'
And the code is:
tf.contrib.seq2seq.prepare_attention(attention_states, attention_option = "bahdanau", num_units = decoder_cell.output_size)
I couldn't figure out what to use instead of the tf.contrib.seq2seq.prepare_attention() function. Is there anyone who can help?
Downgrade your TensorFlow and it'll work. The problem is that prepare_attention is deprecated, hence we use an older version of tf to work with it.
Okay, all you need to do is create a new environment with Python 3.5.4 and then install TensorFlow 1.0.0. That's it. Everything will work fine.
tf.contrib.seq2seq.prepare_attention works only when the TensorFlow version is 1.0; I have version 2.3.1.
My solution:
tf.contrib.seq2seq.prepare_attention = tf.compat.v1.nn.rnn_cell.prepare_attention
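For reference, a hedged sketch of the attention API that superseded prepare_attention in later TensorFlow 1.x releases (BahdanauAttention plus AttentionWrapper from tf.contrib.seq2seq); the shapes and variable names here are illustrative assumptions, not taken from the thread:

import tensorflow as tf

num_units = 128
decoder_cell = tf.contrib.rnn.LSTMCell(num_units)
# attention_states: encoder outputs, shape [batch, max_time, num_units]
attention_states = tf.placeholder(tf.float32, [None, None, num_units])

attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(
    num_units=num_units, memory=attention_states)
decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
    decoder_cell, attention_mechanism, attention_layer_size=num_units)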
Using statsmodels' GLM, the Tweedie deviance is included in the summary function, but I don't know how to do this for XGBoost. Reading the API didn't help either.
In Python this is how you do it. Suppose predictions is the result of your gradient boosted tree and real are the actual values. Then, using statsmodels, you would run this:
import statsmodels.api as sm

# Family.deviance(endog, mu) takes the actual values first, then the predictions;
# the variance-power argument is named var_power.
dev = sm.families.Tweedie(var_power=1.5).deviance(real, predictions)
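An end-to-end sketch under stated assumptions (hypothetical random data; assumes xgboost with the reg:tweedie objective is installed):

import numpy as np
import xgboost as xgb
import statsmodels.api as sm

X = np.random.rand(200, 4)             # hypothetical features
real = np.random.gamma(2.0, 1.0, 200)  # hypothetical non-negative targets

model = xgb.XGBRegressor(objective='reg:tweedie', tweedie_variance_power=1.5)
model.fit(X, real)
predictions = model.predict(X)

# Tweedie deviance via statsmodels, matching the snippet above
dev = sm.families.Tweedie(var_power=1.5).deviance(real, predictions)
print(dev)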