Cannot load image "./darknet/dataset/z115.jpg" STB Reason: can't fopen - yolo

I am training Darknet YOLO-v3 on a Google Cloud VM instance to detect a custom object.
The darknet directory contains the following:
dataset directory: it includes all the annotated data (screenshot of an annotated image)
obj.data file (screenshot of its contents)
obj.names file (screenshot of its contents)
train.txt file (screenshot of its contents)
When I run this command:
./darknet detector train obj.data yolov3-tiny.cfg darknet53.conv.74
the following error is generated:
Cannot load image "./darknet/dataset/z115.jpg" STB Reason: can't fopen
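The "can't fopen" part means darknet could not open that image path from the directory it was launched in, so the usual cause is that the paths listed in train.txt do not resolve relative to the working directory. A minimal sanity check (a sketch only, assuming train.txt lists one image path per line and is run from the same directory as the ./darknet binary):
import os
# Check that every image path in train.txt resolves from the directory
# where ./darknet is executed; print any that do not.
with open('train.txt') as f:
    for line in f:
        path = line.strip()
        if path and not os.path.isfile(path):
            print('missing:', path)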

Related

When trying to convert StyleGan2 Pickle File, getting pickle.load() error: 'No module named 'torch_utils.persistence'

I trained a StyleGan2-ADA model on a custom dataset which generated a .pkl file. I'm now trying to load the .pkl file so that I can convert it to a .pt file, but when I load the .pkl file using:
pickle.load(f)
I'm getting a ModuleNotFoundError: No module named 'torch_utils.persistence'
I've installed torch_utils and other dependencies, but I'm not sure how to fix this issue when loading the file. If anyone has run into this while loading a .pkl file, any help would be greatly appreciated!
The same issue is reported on GitHub here, but with no clear solution.
I have tried installing torch_utils multiple times, but the error still persists.
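A common cause of this particular error is that torch_utils is a directory inside the StyleGAN2-ADA repository rather than the pip package of the same name, so the repo root has to be importable when unpickling. A minimal sketch, assuming the NVlabs stylegan2-ada-pytorch repo is cloned locally and using a hypothetical snapshot filename:
import sys, pickle
# torch_utils/ and dnnlib/ live inside the StyleGAN2-ADA repo, so put the
# cloned repo root on sys.path before unpickling (path is an assumption).
sys.path.insert(0, './stylegan2-ada-pytorch')
with open('network-snapshot.pkl', 'rb') as f:  # hypothetical filename
    data = pickle.load(f)
G = data['G_ema']  # generator module stored in the training snapshot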

Is there any way to load FaceNet model as a tf.keras.layers.Layer using Tensorflow 2.3?

I want to use FaceNet as an embedding layer (which won't be trainable).
I tried loading FaceNet like so:
tf.keras.models.load_model('./path/tf_facenet')
where directory ./path/tf_facenet contains 4 files that can be downloaded at https://drive.google.com/file/d/0B5MzpY9kBtDVZ2RpVDYwWmxoSUk/edit
but a message error shows up :
OSError: SavedModel file does not exist at: ./path/tf_facenet/{saved_model.pbtxt|saved_model.pb}
And the h5 files downloaded from https://github.com/nyoki-mtl/keras-facenet don't seem to work either (they use tensorflow 1.3).
I had the same issue when loading the facenet-keras model. Maybe your Python environment is missing the h5py module,
so you should install it: conda install h5py
Hope this helps!
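If the keras-facenet h5 file does load under TF 2.3, one way to treat it as a frozen embedding is roughly the following sketch (the file path, the 160x160 input size and the embedding size are assumptions based on that repo):
import tensorflow as tf
# Assumed path to the keras-facenet weights; load without compiling.
facenet = tf.keras.models.load_model('./path/facenet_keras.h5', compile=False)
facenet.trainable = False                      # freeze it: fixed embeddings only
inputs = tf.keras.Input(shape=(160, 160, 3))   # input size used by that FaceNet export
embeddings = facenet(inputs, training=False)   # 128-d embedding per face crop
model = tf.keras.Model(inputs, embeddings)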

Darknet demo needs opencv for webcam images OpenCV=1

I have been doing object detection with YOLO using the darknet git repository and wanted to move on to video. I installed OpenCV and tried to run darknet on a video as:
$ ./darknet detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights <video file>
but I get the following message:
$ demo needs opencv for webcam images
I changed the second line in the makefile to:
OpenCV=1
but the message keeps showing. I followed a tutorial (https://pjreddie.com/darknet/install/#cuda), but I do not know what re-making the project means, and the test at the end shows that it is not compiling with OpenCV. I tried to remake it with:
remake darknet
but I get the following error about the libraries:
include/darknet.h:11:30: fatal error: cuda_runtime.h: No such file or directory
compilation terminated.
Edit: I think it is the installation of CUDA, as the command nvidia-smi does not return anything.
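For reference, in the pjreddie darknet Makefile the flags are upper-case, and "re-making" the project just means running make again after editing them; a rough sketch of the usual sequence (the GPU flag only works once CUDA is actually installed, which the cuda_runtime.h error and the dead nvidia-smi suggest is not the case yet):
# edit the top of the darknet Makefile:
#   OPENCV=1   (upper-case; needs the OpenCV development packages installed)
#   GPU=1      (only if CUDA is installed and nvidia-smi works; otherwise leave it at 0)
make clean
make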

loading tensorflow models in java

exportDir = "gs://testbucket/export/";
SavedModelBundle b = SavedModelBundle.load(exportDir, "serve");
gives me the error:
org.tensorflow.TensorFlowException: SavedModel not found in export directory: gs://testbucket/export/
Copying the saved_model.pb to a local directory and then providing a path on the local filesystem works. Whereas
tf.saved_model.loader.load(session, [tf.saved_model.tag_constants.SERVING], export_dir)
works with a GCS bucket. Does anyone know whether loading models with the SavedModelBundle API does not support GCS buckets? How can I load saved_model.pb and the variables from a GCS bucket in Java without copying them over to the local filesystem?
tf.saved_model.tag_constants.SERVING is the string 'serving_default'
Perhaps that's the problem?
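One workaround is to download the export directory to a temporary folder and load from there. A sketch only, assuming the google-cloud-storage Java client is on the classpath and default credentials are configured; the bucket and prefix names are taken from the question:
import com.google.cloud.storage.Blob;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import org.tensorflow.SavedModelBundle;
import java.nio.file.Files;
import java.nio.file.Path;

Storage storage = StorageOptions.getDefaultInstance().getService();
Path localDir = Files.createTempDirectory("export");
// Download every object under export/ into the temp directory, keeping relative paths.
for (Blob blob : storage.list("testbucket", Storage.BlobListOption.prefix("export/")).iterateAll()) {
    if (blob.getName().endsWith("/")) continue;  // skip directory placeholder objects
    Path target = localDir.resolve(blob.getName().substring("export/".length()));
    Files.createDirectories(target.getParent());
    blob.downloadTo(target);
}
SavedModelBundle b = SavedModelBundle.load(localDir.toString(), "serve");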

How to use uploaded files in colab tensorflow?

I have uploaded my train.csv and valid.csv into colab using the files.upload() snippet:
User uploaded file "valid.txt" with length 3387762 bytes
User uploaded file "train.txt" with length 9401172 bytes
Running some TensorFlow code that runs fine locally and fetches files from the current directory causes the following error in Colab:
InvalidArgumentError: assertion failed: [string_input_producer requires a non-null input tensor]
[[Node: input_producer/Assert/Assert = Assert[T=[DT_STRING], summarize=3, _device="/job:localhost/replica:0/task:0/device:CPU:0"](input_producer/Greater, input_producer/Assert/Assert/data_0)]]
I assume the code can't see the files? What's the path to the uploaded files?
Do the answers on this question help?
How to import and read a shelve or Numpy file in Google Colaboratory?
(files.upload() stores the uploaded files in memory. To work with them as files on your filesystem, you'll need to save them explicitly.)
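Following that note, a minimal sketch of saving the uploads to the Colab filesystem:
from google.colab import files
uploaded = files.upload()            # returns {filename: file contents as bytes}
for name, data in uploaded.items():
    with open(name, 'wb') as f:      # write each upload into the working directory
        f.write(data)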