Can anyone give me a quick and dirty tutorial on how to modify the code in this MNIST tutorial and this seq2seq tutorial so that they log things to a log directory that can then be used by TensorBoard? I didn't really understand the docs on the official site.
You can create a SummaryWriter object, passing it the log directory, and then call add_summary on it to log summaries and events to files in that directory. word2vec.py has an example. You can then point TensorBoard at the log directory via --logdir and visualize the summaries.
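For a concrete (if minimal) sketch of that, here is what it looks like with the TF 1.x-style summary API the answer refers to (SummaryWriter later became tf.summary.FileWriter); the toy variable and loss below are placeholders standing in for the tutorial's MNIST/seq2seq graph, and /tmp/mnist_logs is just an example path:

```python
import tensorflow as tf

# Placeholder graph standing in for the tutorial's model and loss.
x = tf.Variable(5.0, name="x")
loss = tf.square(x)
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

tf.summary.scalar("loss", loss)   # declare what to log
merged = tf.summary.merge_all()   # one op that evaluates all summaries

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # The writer creates the event files TensorBoard reads.
    writer = tf.summary.FileWriter("/tmp/mnist_logs", sess.graph)
    for step in range(100):
        summary, _ = sess.run([merged, train_op])
        writer.add_summary(summary, step)  # append this step's summaries
    writer.close()

# Then run: tensorboard --logdir=/tmp/mnist_logs
```

(In TF 2.x the equivalent would be tf.summary.create_file_writer plus tf.summary.scalar inside a writer.as_default() block, but the sketch above matches the API named in the answer.)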
Related
I have modified a notebook in a Kaggle kernel. The output files are saved in /kaggle/working, but whenever I try to browse them, the web page crashes. I also tried this solution (Download Kaggle output file), but it did not work for me. Can anyone please point me to a way to download these files? The code can be found here:
https://www.kaggle.com/code/ahmedtowsiftahmid/convert-mapillary-traffic-sign-annotations-to-yolo?scriptVersionId=117319238
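(One common workaround, in case it helps: zip the outputs from inside the notebook and fetch them through a single link instead of browsing the file tree. A minimal sketch; the labels folder name is a placeholder for whatever directory the notebook actually writes under /kaggle/working:)

```python
import shutil
from IPython.display import FileLink

# Archive the output directory into a single file in the working directory.
shutil.make_archive("labels", "zip", "/kaggle/working/labels")  # -> ./labels.zip

# Render a clickable download link below the cell.
FileLink("labels.zip")
```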
I am trying to run the TensorFlow object detection notebook from the Google Colab example that can be found here. I am running the example locally, and when I try to create the exported model with the model.export(export_dir) function, the model is not saved in the directory passed as export_dir. Instead, the code seems to ignore the option and saves the model in a temporary directory under /tmp/something. I tried both full and relative paths, but the model still ends up in /tmp. My environment is Ubuntu in WSL2, and I am working on a /mnt/n drive.
Any idea how to fix the issue?
I am building a tensorflow_object_detection_api setup locally, and the hope is to transfer the whole setup to a computer cluster I have access to through my school and train the model there. The environment is hard to set up on the cluster's shared Linux system, so I am trying to do as much as possible locally and then just ship everything over and run the training command. My question is: can I generate the TFRecords locally and simply transfer them to the cluster? I am asking because I don't know how these records work. Do they contain links to the actual local directories, or do they contain all the necessary information themselves?
P.S. I tried to generate them on the cluster, but the environment is tricky to set up: TensorFlow and OpenCV are installed in a Singularity container that has to be invoked to run any script using TF or OpenCV, but that container does not have the other requirements needed to run the script that generates the TFRecords from the CSV annotations.
I am pretty new to most of this, so any help is appreciated.
Yes. I tried it and it worked. Apparently, TFRecords contain all the images and their annotations; all I needed to do was transfer the weights to Colab and start training.
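That matches how these records are typically built: the encoded image bytes and the box annotations are serialized directly into each tf.train.Example, so nothing refers back to local paths. A stripped-down sketch of writing one record (the file name, class name, and box values are placeholders; the feature keys follow the usual Object Detection API convention):

```python
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _float_list_feature(values):
    return tf.train.Feature(float_list=tf.train.FloatList(value=values))

def _int64_list_feature(values):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=values))

# Read the image file and embed its raw bytes in the record itself.
with open("example.jpg", "rb") as f:
    encoded_jpg = f.read()

example = tf.train.Example(features=tf.train.Features(feature={
    "image/encoded": _bytes_feature(encoded_jpg),           # the image bytes, not a path
    "image/format": _bytes_feature(b"jpeg"),
    "image/object/bbox/xmin": _float_list_feature([0.1]),   # normalized box coordinates
    "image/object/bbox/ymin": _float_list_feature([0.2]),
    "image/object/bbox/xmax": _float_list_feature([0.6]),
    "image/object/bbox/ymax": _float_list_feature([0.7]),
    "image/object/class/text": _bytes_feature(b"stop_sign"),  # placeholder class name
    "image/object/class/label": _int64_list_feature([1]),
}))

with tf.io.TFRecordWriter("train.record") as writer:
    writer.write(example.SerializeToString())
```

Because everything is embedded at write time, the resulting .record file can be copied to the cluster and read there as-is.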
I am trying to download TensorFlow's handpose model files so I can run them offline.
So, I have downloaded the handpose model files from here:
https://tfhub.dev/mediapipe/tfjs-model/handskeleton/1/default/1
But how can we use these files offline and run predictions in JavaScript as well as in React Native code?
Just change all URLs in the handpose package to point to the URL where you put your model (e.g. in your localhost/public_dir).
That works well for me :)
I am trying to follow the instructions here to generate models for Android:
https://github.com/tensorflow/examples/tree/master/lite/examples/gesture_classification/ml
But when I try to run the commands in the codelab, it asks me to upload model.json and the binary weights file model-weights.bin. Not sure what this means.
If I skip this step, the second-to-last step fails with:
No such file or directory: 'model.json'
Where can I find these?
You first need to use the gesture classification web app to generate a TensorFlow.js model trained on your gestures.
https://github.com/tensorflow/examples/tree/master/lite/examples/gesture_classification/web
Once the model is trained in the web app, you can download the model.json and model-weights.bin files, which are needed in the other steps.
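For context on what those two files feed into: the later steps convert a TF.js layers model into a TFLite model for the Android app. A minimal sketch of that kind of conversion, assuming the tensorflowjs Python package and TF 2.x are installed and that model-weights.bin sits next to model.json (file names as downloaded from the web app):

```python
import tensorflow as tf
import tensorflowjs as tfjs

# Load the TF.js layers model; the weights file referenced by model.json
# must be in the same directory.
keras_model = tfjs.converters.load_keras_model("model.json")

# Convert the Keras model to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```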