How can TensorFlow/Keras's default tmp directory be changed?

I'm working on Ubuntu 18.04 LTS. According to https://keras.io/getting_started/faq/:
The default directory where all Keras data is stored is:
$HOME/.keras/
In case Keras cannot create the above directory (e.g. due to
permission issues), /tmp/.keras/ is used as a backup.
In my case, Keras lacks permissions for $HOME/.keras and can't write to /tmp/.keras due to a full hard drive. I would like to place Keras's tmp directory in a custom location on a different hard drive. How is this best accomplished?
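One approach that may help (a sketch, not a verified fix): recent Keras versions honor the KERAS_HOME environment variable for their config/data directory, and Python's tempfile module (which TensorFlow uses for temporary files) honors TMPDIR. Setting both to a writable location on the other drive before importing TensorFlow/Keras should redirect everything; the paths below are placeholders:

import os

# Placeholders: point these at a writable directory on the other hard drive,
# and set them before tensorflow/keras is imported.
os.environ["KERAS_HOME"] = "/mnt/otherdrive/keras_home"   # Keras config/dataset cache
os.environ["TMPDIR"] = "/mnt/otherdrive/tmp"              # Python's tempfile location
os.makedirs(os.environ["KERAS_HOME"], exist_ok=True)
os.makedirs(os.environ["TMPDIR"], exist_ok=True)

import tensorflow as tf  # imported only after the environment is configured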

Related

Exported object detection model not saved in referenced directory

I am trying to run the TensorFlow object detection notebook from the Google Colab example that can be found here. I am running the example locally, and when I try to create the exported model using the model.export(export_dir) function, the model is not saved in the directory passed as export_dir. Instead, the code seems to ignore the option and saves the model in a temporary directory under /tmp/something. I tried both full and relative paths, but the model still goes to /tmp. My environment is Ubuntu in WSL2 and I am using a /mnt/n drive.
Any idea how to fix the issue?
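For reference, a minimal sketch (not a confirmed fix for the notebook) of exporting to an absolute, pre-created directory; the path and the stand-in model below are hypothetical:

import os
import keras

export_dir = os.path.abspath("/mnt/n/exported_model")  # placeholder path on the WSL2-mounted drive
os.makedirs(export_dir, exist_ok=True)                 # make sure the target exists and is writable

# Stand-in for the notebook's detection model, just to show the call:
model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(1)])
model.export(export_dir)  # recent Keras/TF versions write a SavedModel to export_dir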

How to recover home folder? Cloned partition recovered other directories like /etc/ but not /home

I have two linux partitions on my laptop (one ubuntu and one garuda). Ubuntu was giving me problems so I installed Garuda to check it out. The Garuda partition filled up so I used KDE partition manager to shrink the ubuntu partition so I could expand the Garuda.
Then, Ubuntu wouldn't mount and would not boot as it said the fs was wrong size. I ran fsck on the partition and hit yes to pretty much everything. This included force rewriting blocks it said it couldn't reach and removing inodes, etc. Probably a mistake in hindsight.
Now, I got an external hard drive and cloned the Ubuntu partition using "sudo dd if=/dev/nvme0n1p5 of=/dev/sda1 conv=noerror,sync". The external hard drive mounted without problems, but it does not have a /home/ folder, only folders such as /etc/.
I don't think there are many files I can't get back from a git repo, but it would be nice to have access to the /home folder so I can grab everything, remove the Ubuntu partition, and resize Garuda.
Thanks in advance!
I figured it out. I roughly followed https://unix.stackexchange.com/questions/129322/file-missing-after-fsck, but
I copied the partition to an external drive using dd, then mounted the external drive (which just worked, even though I could not mount the original Ubuntu partition). Then I went into the lost+found folder on that partition and used "find" to search for a file I knew I had in my home folder, and it found that file. I am now able to access all my documents etc.

Could I transfer my tfrecords and use them in another computer?

I am building a tensorflow_object_detection_api setup locally, and the hope is to transfer the whole setup to a computer cluster I have access to through my school and train the model there. The environment is hard to set up on the cluster's shared Linux system, so I am trying to do as much locally as possible and, hopefully, just ship everything there and run the training command. My question is: can I generate the tfrecords locally and just transfer them to the cluster? I am asking because I don't know how these records work: do they include links to the actual local directories, or do they contain all the necessary information in them?
P.S. I tried to generate them on the cluster, but the environment is tricky to set up: tensorflow and opencv are installed in a singularity which needs to be called to run any script with tf or opencv, but that singularity does not have the necessary requirements to run the script which generates the tfrecords from csv annotations.
I am pretty new to most of this, so any help is appreciated.
Yes. I tried it and it worked. Apparently, tfrecords contain all the images and their annotations; all I needed to do was transfer the weights to Colab and start training.
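To illustrate why they are portable: each record embeds the encoded image bytes and the annotation values themselves inside a tf.train.Example, so nothing in the file points back to local directories. A minimal sketch (file names and feature keys are just examples):

import tensorflow as tf

def make_example(image_path, label):
    # Read the raw image bytes and embed them directly in the record.
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    feature = {
        "image/encoded": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[image_bytes])),
        "image/object/class/label": tf.train.Feature(
            int64_list=tf.train.Int64List(value=[label])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

# The resulting file is self-contained and can be copied to any machine.
with tf.io.TFRecordWriter("train.record") as writer:
    writer.write(make_example("example.jpg", 1).SerializeToString())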

How do I export models trained in Google Cloud to my PC?

I trained my convolutional neural network, implemented in TensorFlow, on Google Cloud, but now how do I export the model from Google Cloud Storage to my PC?
I want to download the trained model and use it to make predictions.
I have it like this
If your files are already in a bucket, you can download them from the web interface by clicking them one by one and saving them to your hard drive, or you can use
gsutil cp gs://adeepdetectortraining-mlengine/prueba_27_BASIC_GPU/* .
check here
To recursively download all the files and folders inside a bucket, I suggest using the gsutil command:
gsutil cp -r gs://<BUCKET_NAME>/<FOLDER> .
The above command recursively downloads the files and folders inside "FOLDER" to the current path on the local machine; this is possible because the "-r" option is used.
You can add a destination folder by following the gsutil command documentation:
gsutil cp [OPTIONS] src_url dst_url
Where:
src_url is the path for your bucket.
dst_url is the path for your local machine.
Note that the above commands should be executed on your local machine with the Cloud SDK installed and configured; this is what allows you to copy the files onto your local machine.
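As an alternative to gsutil, roughly the same recursive download can be scripted with the google-cloud-storage Python client; the bucket name, folder prefix, and destination below are placeholders:

import os
from google.cloud import storage

bucket_name = "adeepdetectortraining-mlengine"  # placeholder: your bucket
prefix = "prueba_27_BASIC_GPU/"                 # placeholder: folder inside the bucket
dest_dir = "downloaded_model"                   # placeholder: local destination

client = storage.Client()  # uses the credentials configured with the Cloud SDK
for blob in client.list_blobs(bucket_name, prefix=prefix):
    if blob.name.endswith("/"):  # skip folder placeholder objects
        continue
    local_path = os.path.join(dest_dir, blob.name)
    os.makedirs(os.path.dirname(local_path), exist_ok=True)
    blob.download_to_filename(local_path)  # copy each object to the local machine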

How to pass a dataset directory in Google Datalab

I have set up Google Datalab on my local machine and tried to read a dataset with pandas by passing a custom data directory, but it doesn't take the path correctly: it prepends a root prefix. When I pass an absolute path instead, it gives a "file doesn't exist" error, even though the file is in the directory.
Can anyone help me with what the issue is, or explain whether there is a predefined place where the dataset should be put?
If you ran Datalab as a Docker container on your local machine, it automatically maps your home directory to /content inside the container, so just make sure your dataset is accessible from your home path.
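For example, if the file lives under your home directory on the host, a read along these lines should work inside Datalab; the file name is hypothetical:

import pandas as pd

# On the host: ~/datasets/my_data.csv
# Inside the Datalab container the home directory is mounted at /content:
df = pd.read_csv("/content/datasets/my_data.csv")
print(df.head())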