Importing CSV file to practice Permutation Importance from Kaggle Competition

I am trying to import a CSV file that I downloaded from Kaggle. It is in Finder on my Mac and is saved as a '.csv' file. I typed my code in exactly as they do on Kaggle to get some muscle-memory practice, but I receive an error.
This is my code:
This is my error:
I tried to set up an exercise for permutation importance and there seems to be a problem. I am not sure what it is, considering my code matches Kaggle's exactly.
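Since the code and error aren't shown above, here is a hedged sketch of the usual local setup: unlike a Kaggle notebook, which reads from /kaggle/input/..., a local script has to point at the file's actual location on your Mac. The file name, target column, and model below are placeholders, and scikit-learn's permutation_importance stands in for whichever permutation-importance tool the Kaggle exercise uses.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Read the CSV from its real local path (adjust to wherever Finder shows it).
data = pd.read_csv("~/Downloads/my_kaggle_data.csv")

y = data["target"]                 # placeholder target column
X = data.drop(columns=["target"])  # remaining columns as features

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X_train, y_train)

# Permutation importance on the held-out split.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=1)
for name, score in zip(X_val.columns, result.importances_mean):
    print(f"{name}: {score:.4f}")

If the error is a FileNotFoundError, the fix is usually just replacing the Kaggle-style path with the full local path to the downloaded file.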

Related

Freeze Saved_Model.pb created from converted Keras H5 model

I am currently trying to train a custom model for use in Unity (Barracuda) for object detection, and I am struggling with what I believe to be the last part of the pipeline. Following various tutorials and git repos, I have done the following...
Using Darknet, I have trained a custom model using the Tiny-YOLOv2 architecture (model tested successfully on a webcam Python script).
I have taken the final weights from that training and converted them to a Keras (.h5) file (model tested successfully on a webcam Python script).
From Keras, I then use the tf.saved_model API to turn it into a saved_model.pb.
From the saved_model.pb I then convert it using tf2onnx.convert to change it to an ONNX file.
Supposedly from there it can then work in one of a few Unity sample projects...
...however, this converted model fails to load in the Unity sample projects I've tried to use. From various posts it seems that I may need to use a 'frozen' saved_model.pb before converting it to ONNX. However, all the guides and Python functions that seem to be used for freezing SavedModels require far more arguments than I have awareness of, or data for, after going through so many systems. https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py - for example, after converting to Keras I am only left with an .h5 file, with no knowledge of what an input_graph_def or output_node_names might refer to.
Additionally, for whatever reason, I cannot find any TF version (1 or 2) that can successfully run this Python script using 'from tensorflow.python.checkpoint import checkpoint_management'; it genuinely seems like it no longer exists.
I am not sure why I am going through all of these conversions and steps, but every attempt to find a cleaner process between training and Unity seemed to lead only to dead ends.
Any help or guidance on this topic would be sincerely appreciated, thank you.
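For what it's worth, recent versions of tf2onnx expose a from_keras converter that can convert a Keras model directly, which may let you skip the manual SavedModel/freeze step entirely. This is only a sketch under assumptions: the file names and input shape are placeholders, and a Darknet-converted YOLO .h5 may contain custom layers that need extra handling.

import tensorflow as tf
import tf2onnx

# Load the converted Keras model (path is a placeholder).
model = tf.keras.models.load_model("tiny_yolov2.h5", compile=False)

# Describe the model input; adjust the shape/name to match your network.
spec = (tf.TensorSpec((1, 416, 416, 3), tf.float32, name="input"),)

# Convert straight to ONNX for Barracuda to import.
tf2onnx.convert.from_keras(model, input_signature=spec, opset=13,
                           output_path="tiny_yolov2.onnx")

Whether Barracuda accepts the result still depends on the opset and the layers in the model, so treat this as a starting point rather than a guaranteed pipeline.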

TF Model Maker 'no such file or directory'

I'm following a guide on using TFLite Model Maker with my own training data loaded from my Google Drive, and I keep getting the error "NotFoundError: /content/TFData/Train/img/img (73).jpg; No such file or directory" when trying to train with my data (please see the screenshot below). I think I'm missing something obvious but can't seem to figure it out; apologies if this has been asked before, I'm somewhat new to working in this environment.
I have tried renaming all the images and folders and reshuffling the directory structure, to no avail.
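Since the notebook itself isn't shown, this is only a guess, but file names containing spaces and parentheses (like "img (73).jpg") are a frequent cause of this kind of error, because some loaders split paths on whitespace or mangle them when building the dataset. A quick check like the sketch below (the directory comes from the error message) can confirm whether the file really exists under exactly that name and flag other names likely to cause trouble.

import os

img_dir = "/content/TFData/Train/img"  # directory from the error message

# Is the file the loader asked for actually on disk under that exact name?
on_disk = set(os.listdir(img_dir))
print("img (73).jpg" in on_disk)

# Flag names that commonly break path handling.
for name in sorted(on_disk):
    if " " in name or "(" in name or ")" in name:
        print("suspicious file name:", name)

If you do rename files to remove spaces, remember that the annotation files referencing them need to be updated to match.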

Using custom StyleGAN2-ada network in GANSpace (.pkl to .pt conversion)

I trained a network using Nvidia's StyleGAN2-ADA PyTorch implementation, so I now have a .pkl file. I would like to use the GANSpace code on my network. However, to use GANSpace with a custom model, you need to be able to give it a checkpoint to your model that is uploaded somewhere (they suggest Google Drive) (checkpoint required in the code here). I am not entirely sure how this works or why it works like this, but either way it seems I need a .pt file of my network, not the .pkl file I currently have.
I tried following this tutorial. It seems the GANSpace code actually provides a file (models/stylegan2/convert_weight.py) that can do this conversion. However, the convert_weight.py file that was supposed to be there has been replaced by a link to a whole other repo. If I try to run the convert_weight.py file as below, it gives me the following error:
python content/stylegan2-pytorch/convert_weight.py --repo="content/stylegan2-pytorch/" "content/fruits2_output/00000-fruits2-auto1/network-snapshot-025000.pkl"
ModuleNotFoundError: No module named 'dnnlib'
This makes sense, because there is no dnnlib module there. If I instead point it at a repo that does have the dnnlib module (here), like this:
python content/stylegan2-pytorch/convert_weight.py --repo="content/stylegan2/" "content/fruits2_output/00000-fruits2-auto1/network-snapshot-025000.pkl"
it previously gave me an error saying TensorFlow had not been installed (which, in all fairness, it hadn't, because I am using PyTorch), much like the error reported here. I then installed TensorFlow, but now it gives me this error:
ModuleNotFoundError: No module named 'torch_utils'
This is again the same as the previous issue reported on GitHub. After installing torch_utils I get the same error as SamTransformer (ModuleNotFoundError: No module named 'torch_utils.persistence'). The response there was "convert_weight.py does not supports stylegan2-ada-pytorch".
There is a lot I am not sure about, like why I need to convert a .pkl file to .pt in the first place. A lot of the material talks about converting TensorFlow models to PyTorch ones, but mine was trained in PyTorch originally, so why do I need to convert it? I just need a way to upload my own network for use in GANSpace - I don't really mind how, so any suggestions would be much appreciated.
Long story short, the conversion script provided was meant to convert weights from the official TensorFlow implementation of StyleGAN2 into PyTorch. As you mentioned, your model is already in PyTorch, so it's not surprising that the conversion script doesn't work.
Instead of StyleGAN2 you used StyleGAN2-ADA, which isn't mentioned in the GANSpace repo; most probably it didn't exist at the time the GANSpace repo was created. As far as I know, StyleGAN2-ADA uses the same architecture as StyleGAN2, so as long as you manually convert your .pkl file into the required .pt format, you should be able to continue the setup.
Looking at the source code for converting to PyTorch, GANSpace requires the .pt file to be a dict with the keys ['g', 'g_ema', 'd', 'latent_avg'], whereas StyleGAN2-ADA saves a .pkl containing a dict with the keys ['G', 'G_ema', 'D', 'augment_pipe']. You might be able to get things to work by loading the contents of your .pkl file and re-saving them as a .pt file under the required keys.
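A rough sketch of that re-saving step follows. It assumes the stylegan2-ada-pytorch repo (which defines dnnlib and torch_utils) is importable so the pickle can be deserialised, that GANSpace wants state_dicts under those keys, and that the mapping network's w_avg buffer can stand in for 'latent_avg'; all of these are assumptions to verify against GANSpace's loader, and parameter names may still need remapping between the two implementations.

import pickle
import torch

# Load the StyleGAN2-ADA snapshot (file name is a placeholder).
with open("network-snapshot-025000.pkl", "rb") as f:
    ckpt = pickle.load(f)  # dict with keys 'G', 'G_ema', 'D', 'augment_pipe'

state = {
    "g": ckpt["G"].state_dict(),
    "g_ema": ckpt["G_ema"].state_dict(),
    "d": ckpt["D"].state_dict(),
    # Assumption: use the mapping network's running average of w as latent_avg.
    "latent_avg": ckpt["G_ema"].mapping.w_avg,
}

torch.save(state, "network-snapshot-025000.pt")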

YOLOv3 not starting training

I am trying to train a custom dataset that consists of currency. I followed a YouTube tutorial and made the same folder structure.
I am using Google Colab for the free GPU, together with Darknet. Every time I run training it finishes within seconds without any error, and the final output says "608 x 608 create 6 permanent cpu threads".
The tutorial I followed shows the dataset actually training, but mine keeps getting stuck at this message.
I'm using YOLOv3 to train my dataset and followed every step of changing things in the Makefile. The train.txt and test.txt files also stay empty. (Sorry for my bad English.)
Attached below is a screenshot of the message I get when I try to train my model.
SOLVED: the issue was that my train.txt file was empty because it wasn't getting any image paths. I changed the absolute path of my images folder to a relative path, which saved all the image paths into the train.txt file and allowed training to start.
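For anyone hitting the same wall, a small sketch of that fix is below: it writes relative image paths into train.txt so Darknet can find the training images. The folder layout ("data/obj", .jpg files) and output path are assumptions borrowed from common Darknet setups, not from the original post.

import glob

# Collect the training images using a path relative to the Darknet workspace.
image_paths = sorted(glob.glob("data/obj/*.jpg"))

with open("data/train.txt", "w") as f:
    for path in image_paths:
        f.write(path + "\n")

print(f"wrote {len(image_paths)} image paths to data/train.txt")

If train.txt still ends up empty, the glob pattern is the first thing to check against where the images actually live.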

Saving RL agent by pickle, cannot save because of _thread.RLock -- what is the source of this error?

I am trying to save my reinforcement learning agent class after training, by pickling it, so that I can continue training it later.
The script used is:
import pickle

with open('agent.pickle', 'wb') as agent_file:
    pickle.dump(agent, agent_file)
I am receiving an error:
TypeError: can't pickle _thread.RLock objects
I have searched for this error message but am not sure what the actual source of the error is. The traceback is uninformative with respect to which specific line of code is causing it. The scripts used come from 3 independent .py files. A TensorFlow/Keras model has been built in one of them, but again I am unsure whether that is where this is coming from. I have read this error can come from lambda functions, but none of these are defined by myself, unless they are used internally by a package such as TensorFlow.
I too faced the same error but found a workaround. After model.fit(), use model.save("modelName"); it will create a folder inside which the model is saved.
To load the model, use keras.models.load_model("modelName").
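To tie this back to the original goal of persisting the whole agent, one common pattern (a sketch under assumptions; attribute names like agent.model are placeholders for whatever your agent class actually contains) is to save the Keras model with its own API, since it holds the _thread.RLock objects that pickle chokes on, and pickle only the remaining agent state.

import pickle
from tensorflow import keras

# Save the unpicklable Keras model with Keras' own saving API.
agent.model.save("agent_model")

# Temporarily detach the model so the rest of the agent can be pickled.
model = agent.model
agent.model = None
with open("agent_state.pickle", "wb") as f:
    pickle.dump(agent, f)
agent.model = model  # restore for continued use in this session

# Later: rebuild the agent from the two saved pieces.
with open("agent_state.pickle", "rb") as f:
    restored_agent = pickle.load(f)
restored_agent.model = keras.models.load_model("agent_model")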