Goal
I'd like to have a credentials file for things like API keys stored in a place where someone I've shared the Colab notebook with can't access the file or the information.
Situation
I'm calling several APIs in a Colab notebook and have multiple keys for different APIs. If there are approaches at different levels of complexity, I'd prefer a simpler one.
Current attempts
I'm storing the keys in the main Python notebook while I research the best way to approach this. I'm pretty new to authentication, so I'd prefer a simpler solution. I haven't seen any articles addressing this directly.
Greatly appreciate any input on this.
You can store the credential files in your Google Drive.
Only you can access them at /content/drive/MyDrive/ after mounting your Drive; other people you share the notebook with would need their own credential files in their own Drive.
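For example, here is a minimal sketch of reading a key after mounting Drive. The credentials/api_keys.json path and the key name are just assumptions for illustration, not anything Colab requires:

```python
# Minimal sketch: read an API key from a file that lives only in *your* Drive.
# The path and file name below are placeholders.
import json

from google.colab import drive

drive.mount('/content/drive')  # prompts you to authorize your own Google account

with open('/content/drive/MyDrive/credentials/api_keys.json') as f:
    keys = json.load(f)

api_key = keys['SOME_SERVICE']  # use the key without pasting it into the notebook
```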
Related
I want to use Google Colab, but my data is pretty huge. So I want to access my data directly from my machine in Google Colab, and I also want to save files directly to a directory on my machine. Is there a way I can do that? I can't seem to find any.
Take a look at how to use a local runtime here:
https://research.google.com/colaboratory/local-runtimes.html
Otherwise, you can store your data on GDrive, GCS, or S3. Then you can just mount it, with no need to upload it every time.
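If you go the GCS route, one option (not the only one) is to read objects directly with pandas plus gcsfs instead of mounting anything. The bucket and file names below are placeholders:

```python
# Sketch, assuming gcsfs is installed (pip install gcsfs) and you have access
# to the bucket. pandas streams the object straight from GCS.
import pandas as pd

df = pd.read_csv('gs://my-bucket/path/to/my-data.csv')
print(df.shape)
```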
Is there a way to check how many people are using Google Colab at the same time you are?
I tried looking this up via Google and other sources and couldn't find any concrete information about the number of users using the GPUs at once.
No, there's no way to view overall Colab usage.
You could add analytics reporting to individual notebooks using Python APIs like this one:
https://developers.google.com/analytics/devguides/reporting/core/v4/quickstart/service-py
But, that would only report usage for users of a given notebook who execute code rather than users of Colab overall.
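As a rough illustration of the idea (this sketch swaps in the Google Analytics Measurement Protocol rather than the Reporting API linked above), a cell in the notebook could log an "executed" event like this; the tracking ID is a placeholder you would replace with your own property's ID:

```python
# Hedged sketch: send a single "notebook executed" event to Google Analytics
# via the Measurement Protocol. 'UA-XXXXXXXX-X' is a placeholder tracking ID.
import uuid

import requests

requests.post(
    'https://www.google-analytics.com/collect',
    data={
        'v': '1',                   # Measurement Protocol version
        'tid': 'UA-XXXXXXXX-X',     # your tracking ID (placeholder)
        'cid': str(uuid.uuid4()),   # anonymous client id for this session
        't': 'event',               # hit type
        'ec': 'colab',              # event category
        'ea': 'notebook_executed',  # event action
    },
)
```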
I mounted my Google Drive in my Colab notebook, and I have a fairly big pandas dataframe that I'm trying to save with mydf.to_feather(path), where path is on my Google Drive. The file is expected to be about 100 MB, and it is taking forever.
Is this to be expected? It seems the network link between Colab and Google Drive is not great. Does anyone know if the servers are in the same region/zone?
I may need to change my workflow to avoid this. If you have any best practices or suggestions, please let me know; anything short of going all-in on GCP, which I expect doesn't have this kind of latency.
If you call df.to_feather("somewhere on your gdrive") from Google Colab with a file on the order of ~X00 MB, you may see sporadic performance. It can take anywhere from a few minutes to a whole hour to save the file; I can't explain this behavior.
Workaround: first save to /content/, the Colab host machine's local directory, then copy the file from /content to your gdrive mount directory. This is much more consistent and faster for me; I just can't explain why .to_feather directly to gdrive suffers so much.
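A minimal sketch of that workaround (the paths are examples, and mydf is your dataframe from the question):

```python
# Write the feather file to the Colab VM's local disk first, then copy it
# to the mounted Drive in one bulk transfer.
import shutil

local_path = '/content/mydf.feather'
drive_path = '/content/drive/MyDrive/mydf.feather'

mydf.to_feather(local_path)           # fast: local disk on the Colab host
shutil.copy(local_path, drive_path)   # single copy onto the Drive mount
```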
I am developing a Win8 Store app which allows users to download different types of files from an online learning platform and store them locally. I am also considering a feature to help users organize these downloaded files by placing them in different folders (based on course name, etc.).
I was using the Documents Library previously, but for every type of file the user could download, I need to add a file type association, which does not make a lot of sense since my app wouldn't actually be able to open such files itself. So which local storage should my app use?
Many thanks in advance.
Kaizhi
The access to storage by Windows Store apps is quite restrictive, especially the DocumentsLibrary.
As you have noticed, you need to declare a file type association for every file type you want to read from or write to the DocumentsLibrary. This means your app needs to handle file activations for these types in a meaningful way, which your app probably should not do.
But even if you jump through this hoop, there is another one that is not documented on the MSDN page of the DocumentsLibrary, but "hidden" in a lengthy page about app capability declarations: According to the current rules, you are not allowed to use the DocumentsLibrary for anything but offline access to SkyDrive! Bummer...
So what's left?
You can use SkyDrive or another cloud storage to put files in a well known place (which might or might not be somewhere on the hard disk). This is probably both overkill and undesirable in your case.
Or you save the files in the local app storage, provide your own in-app file browser and open the files with their default app. Seems viable to me.
Or, maybe, you can do something with share contracts or other contracts. I don't know much about these yet, but I doubt that they are helpful in your situation.
And that's it...
(Based on my current experience. No guarantee of correctness or completeness.)
I just set up an IPython (0.13.1) Notebook server in order to use it during my introductory course on Python. The server is protected with a password and is currently up and running.
Now, I need my students (about 15 people) to access the same ipynb document at the same time, play around with it, and eventually modify the code examples, while making sure no one overwrites the uploaded version of the ipynb file.
How can I set this up?
First, take a look at teaching with the IPython notebook. Try to list what types of applications you want to run on this. On the other hand, it is possible to use some cloud computing resources, for example on Heroku.
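Beyond that guide, one simple, low-tech way to keep 15 students from overwriting the uploaded ipynb (just a sketch; the file names and student list are placeholders) is to hand each student their own copy of the notebook:

```python
# Make one copy of the master notebook per student so nobody overwrites
# the original uploaded version. Names below are placeholders.
import shutil

master = 'intro_course.ipynb'
students = ['alice', 'bob', 'carol']  # ...about 15 names in practice

for name in students:
    shutil.copy(master, 'intro_course_{0}.ipynb'.format(name))
```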