I want to use Google Colab, but my data is pretty large, so I want to access it directly from my local machine inside Colab, and I also want to save output files directly to a directory on my machine. Is there a way to do that? I can't seem to find one.
Look at how to use a local runtime here:
https://research.google.com/colaboratory/local-runtimes.html
Otherwise, you can store your data on Google Drive, GCS, or S3 and mount it; there is no need to upload it every time.
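For example, mounting Google Drive from a notebook is a one-liner. A minimal sketch (the path under MyDrive is a placeholder; GCS or S3 would instead need a client library such as gcsfs or s3fs):

from google.colab import drive

drive.mount('/content/drive')  # prompts for authorization on first use

# After mounting, your Drive files appear under /content/drive/MyDrive
with open('/content/drive/MyDrive/some_folder/data.csv') as f:  # placeholder path
    print(f.readline())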
Related
Colab code runs on a temporarily allocated machine, so the running environment is not aware of the notebook's location on Google Drive.
I am wondering if there is an API that Colab provides, which I could invoke programmatically, to tell me the location of the Colab notebook in Google Drive (I can do it manually by clicking File > Locate in Drive, but I need to do this via code in Colab). This would be useful for saving the data generated by my code.
Of course I can hard-code this path (after mounting Google Drive), but then I would need to update it each time, and if I forget, it can even overwrite the previous data. Is there a way it could be detected automatically? It seems this solution: https://stackoverflow.com/a/71438046/1953366 walks through every path until it matches the filename, which is not efficient and will also fail if I have the same file name at different locations.
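For reference, the linked answer boils down to something like the following sketch (assuming Drive is mounted and the notebook filename, a placeholder here, is known):

import os
from google.colab import drive

drive.mount('/content/drive')

def locate(filename, root='/content/drive/MyDrive'):
    # Walk every folder under Drive until a file with the given name is found
    for dirpath, _, filenames in os.walk(root):
        if filename in filenames:
            return os.path.join(dirpath, filename)
    return None

print(locate('my_notebook.ipynb'))  # placeholder notebook name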
Any better solution(s)?
What's the best place to upload data for use in Google Colaboratory notebooks? I'm planning to make some notebooks that load netCDF data using Python, and I'd like to be able to send the notebooks to other people and have them load the same data without difficulty.
I know I can load data from my own Google Drive, but if I sent other people the notebooks, then I'd have to send them the data files too, right?
Is it possible to have a central data repository that multiple people can load data from? The files that I'd like to use are ~10-100 MB. I only need to read data, not write it. Thanks!
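A minimal sketch of one way this could work, assuming the files are hosted at a public, stable URL such as a GitHub release asset or a public GCS bucket (the URL below is a placeholder):

import requests
import xarray as xr  # preinstalled on Colab; otherwise: pip install xarray netCDF4

url = 'https://example.com/path/to/data.nc'  # placeholder; host the file somewhere public
local_path = '/tmp/data.nc'

# Download the file to the VM's local disk, then open it as a dataset
with open(local_path, 'wb') as f:
    f.write(requests.get(url).content)

ds = xr.open_dataset(local_path)
print(ds)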
My friend and I are working on a project together on Google Colab for which we require a dataset but we keep running into the same problem while uploading it.
What we're doing right now is uploading it to Drive, giving each other access, and then mounting Google Drive each time. This becomes time-consuming and irritating, as we need to authorize and mount every time.
Is there a better way, so that we can upload the dataset to the home directory and access it directly each time? Or is that not possible because we're assigned a different machine each time?
If you create a new notebook, you can set it to mount Drive automatically, so there is no need to authenticate every time.
See this demo.
Is there a way to write out files with Google Colab?
For example, if I use
import requests

url = 'https://example.com/data.dat'  # placeholder URL
r = requests.get(url)
Where will those files be stored? Can they be found?
And similarly, can I get the file I wrote out via, say, the TensorFlow save function?
saver = tf.train.Saver(...)
...
path = saver.save(sess, "./my_model.ckpt")
Thanks!
In your first example, the data is still in r.content, so you first need to write it to a file, e.g. open('data.dat', 'wb').write(r.content).
Then you can download it with files.download:
from google.colab import files
files.download('data.dat')
Downloading your model is the same:
files.download('my_model.ckpt')
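Putting the pieces together, a minimal end-to-end sketch (the URL is a placeholder):

import requests
from google.colab import files

url = 'https://example.com/data.dat'  # placeholder
r = requests.get(url)

# Persist the response body to the VM's local disk
with open('data.dat', 'wb') as f:
    f.write(r.content)

files.download('data.dat')  # triggers a browser download to your local machine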
I found it easier to first mount your Google Drive to the non-persistent VM and then use os.chdir() to change your current working folder.
After doing this, you can work exactly as you would on a local machine.
I have a gist listing several ways to save and transfer files between the Colab VM and Google Drive, but I think mounting Google Drive is the easiest approach.
For more details, please refer to mount_your_google_drive.md in this gist:
https://gist.github.com/Joshua1989/dc7e60aa487430ea704a8cb3f2c5d6a6
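A minimal sketch of that approach (the folder name is a placeholder):

import os
from google.colab import drive

drive.mount('/content/drive')
os.chdir('/content/drive/MyDrive/my_project')  # placeholder folder on your Drive

# From here on, relative paths read and write directly on Drive
with open('results.txt', 'w') as f:
    f.write('hello from Colab')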
When I run a Python script in Colaboratory, it runs all the previous code cells.
Is there any way to save the previous cells' state/output so that I can directly run the next cell after returning to the notebook?
The outputs of Colab cells shown in your browser are stored in notebook JSON saved to Drive. Those will persist.
If you want to save your Python variable state, you'll need to use something like pickle to save to a file and then save that file somewhere outside of the VM.
Of course, that's a bit of a hassle. One way to make things easier is to use a FUSE filesystem to mount some persistent storage, where you can easily save regular files but have them persist beyond the lifetime of the VM.
An example of using a Drive FUSE wrapper to do this is in this example notebook:
https://colab.research.google.com/notebook#fileId=1mhRDqCiFBL_Zy_LAcc9bM0Hqzd8BFQS3
This notebook shows the following:
Installing a Google Drive FUSE wrapper.
Authenticating and mounting a Google Drive backed filesystem.
Saving local Python variables using pickle as a file on Drive.
Loading the saved variables.
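The linked notebook uses a Drive FUSE wrapper; with the newer built-in drive.mount, the save and load steps look roughly like this sketch (variable and file names are placeholders):

import pickle
from google.colab import drive

drive.mount('/content/drive')

state = {'step': 42, 'losses': [0.9, 0.5, 0.3]}  # placeholder variables to persist

# Save local Python variables as a pickle file on Drive
with open('/content/drive/MyDrive/state.pkl', 'wb') as f:
    pickle.dump(state, f)

# Later (possibly on a fresh VM): load them back
with open('/content/drive/MyDrive/state.pkl', 'rb') as f:
    state = pickle.load(f)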
It's a nope. As #Bob says in this recent thread: "VMs time out after a period of inactivity, so you'll want to structure your notebooks to install custom dependencies if needed."