Dataset cannot be exported in TensorFlow CSV format - tensorflow

I prepared my dataset and created a version of it. Then I tried to export the dataset in TensorFlow Object Detection CSV format, but when I opened the resulting zip file, I saw that there was nothing inside except "README.roboflow.txt" and "README.Dataset.txt".
Am I doing something wrong, is this still in development, or is it a bug?
Thanks

I just tried this and was able to get my test, train, and valid folders in the export, which also included the README.Dataset.txt and README.roboflow.txt.
Can you try again, or share the email you used to build this project so one of us Roboflow staff members can take a look at it? Feel free to DM it to me on our forum if it still doesn't work.
Kelly M., Developer Advocate

Related

Compile pbtxt into binarypb

I'm playing around with Mediapipe and I'm trying to better understand how the graph works and what the inputs and outputs of the different calculators are.
If I understand correctly, the .pbtxt files are just plain-text instructions that describe how each calculator should interact with the rest of the calculators. These files are compiled into .binarypb files, which are fed to Mediapipe.
For example, this .pbtxt file got compiled into this .binarypb file.
I have a few questions:
I saw https://viz.mediapipe.dev/, which seems to be Mediapipe's playground. That playground seems to compile the text in the textarea on the right. If that is correct, how does it do it? Is there any documentation I can read about it? How are .pbtxt files compiled into .binarypb?
I'm especially interested in the web capabilities of Mediapipe and I'd like to create a small POC using both the face-mesh and depth-to-iris features. Unfortunately, there isn't a "solution" for the second one, but there is a demo in Mediapipe's viz claiming depth-to-iris web support (the demo doesn't seem to be working correctly, though). If I were able to create a .pbtxt with a pipeline containing the features I'm interested in, how would I "compile" the .wasm and .data files required to deploy the code to the web?
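For what it's worth, a .binarypb is just the binary serialization of the same CalculatorGraphConfig proto that the .pbtxt spells out in text form. A minimal sketch of the conversion in Python, assuming the generated calculator_pb2 module is importable from the mediapipe pip package (the exact module path may vary by version) and a hypothetical graph.pbtxt as input:

from google.protobuf import text_format
from mediapipe.framework import calculator_pb2  # assumption: module path may differ by version

# Parse the text-format graph into a CalculatorGraphConfig proto
with open('graph.pbtxt') as f:  # hypothetical input file
    config = text_format.Parse(f.read(), calculator_pb2.CalculatorGraphConfig())

# Write the same proto out in binary form
with open('graph.binarypb', 'wb') as f:
    f.write(config.SerializeToString())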

Where is the tensorflow:Assets file location?

I recently created a prototype model in TensorFlow, and I'm asking where the saved model is stored on my PC, i.e., its file location. Saving my model output this info:
INFO:tensorflow:Assets written to: m_translator\assets
I don't know exactly what this means; I hope someone can explain it to me.
From the info you have given, the files are written to m_translator. Check the directory where you are running your code for a folder named m_translator, or check the filepath you provided while saving the model. Thank you.
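To see exactly where those assets landed, you can resolve the relative path and reload the model from it; a minimal sketch, assuming the model was saved with Keras' model.save('m_translator') (which the Assets message suggests):

import os
import tensorflow as tf

# The log message shows a relative path, so it resolves against the
# directory the script was run from
print(os.path.abspath('m_translator'))

# Reloading confirms the folder holds a complete SavedModel
model = tf.keras.models.load_model('m_translator')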

How to quickly get code from Kaggle notebook without requiring registration?

I can already see three ways, but none are quick (not compared to, say, accessing a raw file on GitHub):
Fork/download (requires registration)
Follow instructions here (i.e. download, open up in jupyter/ipython notebook)
Copy the code blocks manually, one by one (bad for long notebooks)
Is there an easier way? (I'm hoping, ideally, to add raw to the URL somewhere, just like on GitHub.)
In case it's useful to others: put the notebook URL here to extract the raw code.
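If you can get hold of the .ipynb file itself (it is just JSON), extracting the code cells locally only takes a few lines; a minimal sketch, assuming a notebook already downloaded as notebook.ipynb:

import json

# Load the notebook; .ipynb files are plain JSON
with open('notebook.ipynb') as f:
    nb = json.load(f)

# Code cells store their source as a list of lines
code = '\n\n'.join(''.join(cell['source'])
                   for cell in nb['cells']
                   if cell['cell_type'] == 'code')
print(code)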

SVHN Data questions

I have been reading and working on SO questions related to the Street View House Numbers (SVHN) datasets. The files are available at 2 different locations:
Stanford:
The Street View House Numbers (SVHN) Dataset
kaggle:
Street View House Numbers (SVHN) | Kaggle
My question is related to the format of the digitStruct.mat files for each image set (train, test, and extra). These define the name, label, and bounding box dimensions for each image. As I understand it, the .mat file is written as a Matlab structure in HDF5 format (which can be read with h5py).
I have been able to access and read the digitStruct.mat files from kaggle with h5py. I cannot open the same files from the Stanford site with h5py (or with HDFView). Some SO posts I've read indicate the Stanford files are an older Matlab format and should be read with scipy.io.loadmat.
Are the files at Stanford and kaggle the same?
If not, what are the differences?
Should I be able to open the Stanford digitStruct.mat files with h5py?
If so, what method should I use to download and extract the Stanford tar.gz files? (FYI, I'm on Win-7 and have been using HTTP download and WinZip to extract.)
I am adding additional info to document different behavior observed with different .mat files. It may help with diagnosis.
I can open and operate on .mat files from kaggle with this call:
h5f = h5py.File('digitStruct.mat','r')
For files from Stanford, I get different errors depending on the file and function used to open.
The command below executes without an error message, which leads me to believe test_32x32.mat is not a Matlab v7.3 file (and so cannot be opened with h5py).
mat = scipy.io.loadmat('./Stanford/test_32x32.mat')
Neither of these calls works (brief error messages provided):
mat = scipy.io.loadmat('./test/digitStruct.mat')
Traceback...
NotImplementedError: Please use HDF reader for matlab v7.3 files
h5f = h5py.File('./test/digitStruct.mat','r')
Traceback...
OSError: Unable to open file (file signature not found)
In addition, I cannot open test/digitStruct.mat with HDFView. My conclusion for the Stanford digitStruct.mat files: they might be Matlab v7.3 files, but they were corrupted when I downloaded them. However, I'm not sure what I did wrong (since I can download and read the kaggle files without problems).
With some Linux detective work, I figured out the problem.
As I suspected, the digitStruct.mat files extracted from the *.tar.gz files on the Stanford site are HDF5 (Matlab v7.3) files, and mine were corrupted when I downloaded and extracted them.
To confirm, I downloaded the 3 tar.gz files with a browser on a Linux system, used the tar command to extract them, and successfully opened the results with h5py on Linux. I then transferred them to my Windows system, and each worked as expected with h5py.
This is a little surprising, as I have used WinZip to extract tarball files in the past. Apparently there's something special about these that caused the corruption.
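One way to sidestep the extractor on Windows altogether is Python's standard tarfile module, which extracts the archive byte-for-byte; a minimal sketch:

import tarfile

# Extract test.tar.gz into ./test with no text-mode conversion
with tarfile.open('test.tar.gz', 'r:gz') as tar:
    tar.extractall('test')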
Hopefully this saves someone the same headache in the future.
Note: the 3 xxxx_32x32.mat files are in an older Matlab format and must be accessed with scipy.io.loadmat()
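Since both formats coexist in this dataset, a small helper that tries scipy first and falls back to h5py for v7.3 files can save some guesswork; a minimal sketch:

import h5py
import scipy.io

def load_mat(path):
    # Pre-v7.3 files (e.g. the xxxx_32x32.mat files) load with scipy...
    try:
        return scipy.io.loadmat(path)
    except NotImplementedError:
        # ...while v7.3 files (the digitStruct.mat files) are HDF5 and need h5py
        return h5py.File(path, 'r')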

How to detect only humans in the TensorFlow object detection API

I am using the TensorFlow Object Detection API to detect objects, and it is working fine on my Windows system. How can I change it to detect only specific objects? For example, I only want to detect humans, not all the objects.
As per the 1st comment in this answer, I have checked the visualization file but didn't find anything related to categories of objects. Then I looked into category_util.py and found that there is a csv file from which all the categories are loaded, but I didn't find this csv file in the project. Can anyone please point me in the right direction? Thanks
I assume from your question that you did not fine-tune the model yourself, but just used a pretrained one from the model zoo!?
In this case, I think the model already detects humans AND other objects, and you want these other objects to disappear!? To do so, you just have to change your label_map.pbtxt by deleting all the classes you don't need. If you are not sure where to find this file, have a look in your .config file and search for label_map_path="PATH".
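An alternative that leaves the label map untouched is to filter the detection output itself before visualization. This is a sketch, not the API's own mechanism, assuming a COCO-pretrained model (where class id 1 is 'person') and the numpy output_dict produced by the API's standard inference demo:

def keep_only_person(output_dict, person_id=1):
    # Boolean mask over the per-detection arrays; 1 is 'person' in the
    # COCO label map shipped with the API (an assumption if you retrained)
    keep = output_dict['detection_classes'] == person_id
    for key in ('detection_boxes', 'detection_scores', 'detection_classes'):
        output_dict[key] = output_dict[key][keep]
    return output_dict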