Write out file with google colab - tensorflow

Is there a way to write out files with Google Colab?
For example, if I use
import requests
r = requests.get(url)
Where will those files be stored? Can they be found?
And similarly, can I retrieve a file I wrote out via, say, the TensorFlow save function?
saver = tf.train.Saver(....)
...
path = saver.save(sess, "./my_model.ckpt")
Thanks!

In your first example, the data is still in r.content, so you first need to write it to a file, e.g. with open('data.dat', 'wb').write(r.content)
Then you can download it with files.download:
from google.colab import files
files.download('data.dat')
Downloading your model is the same:
files.download('my_model.ckpt')
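Putting the first example together, a minimal sketch (url is assumed to already be defined; data.dat is just a placeholder name):
import requests
from google.colab import files

r = requests.get(url)                  # fetch the remote file into memory
with open('data.dat', 'wb') as f:      # write the response body to the Colab VM's local disk
    f.write(r.content)
files.download('data.dat')             # trigger a browser download of the file from the VM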

I found it is easier to first mount your Google Drive to the non-persistent VM and then use os.chdir() to change your current working directory.
After doing this, you can do exactly the same things you would on a local machine.
I have a gist listing several ways to save and transfer files between the Colab VM and Google Drive, but I think mounting Google Drive is the easiest approach.
For more details, please refer to mount_your_google_drive.md in this gist:
https://gist.github.com/Joshua1989/dc7e60aa487430ea704a8cb3f2c5d6a6
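For reference, a minimal sketch of that approach using Colab's built-in Drive helper (working inside the Drive root here; any subfolder name would be your own):
from google.colab import drive
import os

drive.mount('/content/drive')          # authenticate and mount your Drive in the VM
os.chdir('/content/drive/My Drive')    # work directly inside your Drive from now on
# files written with relative paths now land in Google Drive and persist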

Related

How to access data from machine using google colab

I want to use Google Colab, but my data is pretty huge, so I want to access it directly from my machine in Colab. I also want to save files directly to a directory on my machine. Is there a way to do that? I can't seem to find one.
Look at how to use a local runtime, described here:
https://research.google.com/colaboratory/local-runtimes.html
Otherwise, you can store your data on Google Drive, GCS, or S3 and simply mount it; there is no need to upload it every time.
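As a rough sketch of the GCS route (the bucket and object names are placeholders), you can authenticate the VM and copy data in with gsutil:
from google.colab import auth
auth.authenticate_user()                              # grant this VM access to your GCP account
!gsutil cp gs://your-bucket/your-data.csv /content/   # copy the object to the VM's local disk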

Does google colab permanently change file

I am doing some data pre-processing on Google Colab and am wondering how it handles manipulating datasets. For example, R does not change the original dataset until you use write.csv to export the changed one. Does it work similarly in Colab? Thank you!
Until you explicitly save your changed data, e.g. using df.to_csv to the same file you read from, your changed dataset is not saved.
Also remember that after a period of inactivity (up to an hour or so), your Colab session might expire and all progress will be lost.
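For example, a minimal sketch of explicitly writing a changed DataFrame back to mounted Drive (df, the column, and the path are placeholders):
import pandas as pd
from google.colab import drive

drive.mount('/content/drive')
df = pd.read_csv('/content/drive/My Drive/data.csv')
df['price'] = df['price'] * 2                               # in-memory change only; nothing persisted yet
df.to_csv('/content/drive/My Drive/data.csv', index=False)  # the change is saved only at this point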
Update
To download a model, dataset, or any large file from Google Drive, the gdown command is already available:
!gdown https://drive.google.com/uc?id=FILE_ID
Clone your code from GitHub and run predictions using the model you already downloaded:
!git clone https://USERNAME:PASSWORD@github.com/username/project.git
Prefix a line with ! in Colab and it is run as a bash command. For example, you can download files from the internet using wget:
!wget file_url
You can also commit and push your updated code to GitHub, and save your updated dataset/model to Google Drive or Dropbox.

Taking forever to save a pandas dataframe from google colab session to my google drive

I mounted my Google Drive in my Colab notebook, and I have a fairly big pandas DataFrame that I try to save with mydf.to_feather(path), where path is on my Google Drive. The file is expected to be around 100 MB, and it is taking forever.
Is this to be expected? The network link between Colab and Google Drive doesn't seem great. Does anyone know if the servers are in the same region/zone?
I may need to change my workflow to avoid this. If you have any best practices or suggestions, please let me know, anything short of going all-in on GCP (which I expect doesn't have this kind of latency).
If you call df.to_feather("somewhere on your gdrive") from Google Colab with a file on the order of a few hundred MB, you may see sporadic performance: it can take anywhere from a few minutes to a whole hour to save. I can't explain this behavior.
Workaround: first save to /content/, the Colab host machine's local directory, then copy the file from /content to your Drive mount directory. This works much more consistently and faster for me. I just can't explain why .to_feather directly to Drive suffers so much.
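A minimal sketch of that workaround, assuming Drive is mounted at /content/drive (mydf and the file names are placeholders):
import shutil

mydf.to_feather('/content/mydf.feather')   # write to the VM's fast local disk first
shutil.copy('/content/mydf.feather',       # then copy the finished file to Drive in one go
            '/content/drive/My Drive/mydf.feather')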

Export Excel files from Google Colab

I use the following code to save some DataFrame data in Google Colab, but it is saved in the "local file system", not on my computer or Google Drive. How can I get the Excel file from there?
Thanks!
writer = pd.ExcelWriter('hey.xlsx')
df.to_excel(writer)
writer.save()
from google.colab import files
files.download('hey.xlsx')  # download the file that was actually written above
Use Google Chrome. Firefox shows Network Error.
You'll want to upload the file using something like:
from google.colab import files
uploaded = files.upload()
Then, you can pick out the file data using something like uploaded['your_file_here.xls'].
There's a distinct question that includes recipes for working with Excel files in Drive that might be useful:
https://stackoverflow.com/a/47440841/8841057
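For example, a rough sketch of reading the uploaded bytes back into pandas (the dictionary key is whatever filename you picked in the dialog):
import io
import pandas as pd
from google.colab import files

uploaded = files.upload()                                         # choose the Excel file in the browser dialog
df = pd.read_excel(io.BytesIO(uploaded['your_file_here.xls']))    # parse the uploaded bytes into a DataFrame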

Saving Variable state in Colaboratory

When I run a Python script in Colaboratory, it reruns all the previous code cells.
Is there any way the previous cell state/output can be saved so that I can directly run the next cell after returning to the notebook?
The outputs of Colab cells shown in your browser are stored in notebook JSON saved to Drive. Those will persist.
If you want to save your Python variable state, you'll need to use something like pickle to save to a file and then save that file somewhere outside of the VM.
Of course, that's a bit of trouble. One way to make things easier is to use a FUSE filesystem to mount some persistent storage where you can easily save regular files but have them persist beyond the lifetime of the VM.
An example of using a Drive FUSE wrapper to do this is in this example notebook:
https://colab.research.google.com/notebook#fileId=1mhRDqCiFBL_Zy_LAcc9bM0Hqzd8BFQS3
This notebook shows the following:
Installing a Google Drive FUSE wrapper.
Authenticating and mounting a Google Drive backed filesystem.
Saving local Python variables using pickle as a file on Drive.
Loading the saved variables.
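The gist above uses an older FUSE wrapper; a rough sketch of the same pattern with Colab's current drive.mount helper (the variable and file names are placeholders) looks like:
import pickle
from google.colab import drive

drive.mount('/content/drive')

state = {'step': 42, 'history': [0.9, 0.8]}    # whatever Python state you want to keep

with open('/content/drive/My Drive/state.pkl', 'wb') as f:
    pickle.dump(state, f)                      # persists on Drive beyond the VM's lifetime

# later, in a fresh VM, load it back
with open('/content/drive/My Drive/state.pkl', 'rb') as f:
    state = pickle.load(f)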
It's a nope. As @Bob says in this recent thread: "VMs time out after a period of inactivity, so you'll want to structure your notebooks to install custom dependencies if needed."