Can I run my own instance of Google Colab? - google-colaboratory

I would like to understand how feasible it would be to spin up my own instance of a Colaboratory server that I could run within a closed network. Using the public version is unfortunately not yet an option in my company. I would really like to have something equivalent that I could use internally, which has all of the nice features such as collaborative editing.
Has anyone tried doing this? Is it even possible?

There's no way to spin up a full instance of the Colab service; i.e., the bits that integrate with GSuite / Docs / GCP / TPUs.
But you can run local backends using the instructions here:
http://research.google.com/colaboratory/local-runtimes.html
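
For reference, at the time of writing the setup described there boils down to a few shell commands run on the machine that will act as the backend (the package name and flags below come from those instructions and may change over time):

pip install jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws
# allow Colab's frontend to connect, then paste the URL this prints
# into Colab's "Connect to local runtime" dialog
jupyter notebook \
  --NotebookApp.allow_origin='https://colab.research.google.com' \
  --port=8888 \
  --NotebookApp.port_retries=0

Note that this only swaps out the compute backend; the notebook frontend (and the collaborative editing) still lives at colab.research.google.com, so it doesn't help on a fully closed network.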

Related

Use Google Colab Resources on local IDE

I have a big doubt... I see a lot of blog posts saying that you can use the Colab front-end to edit a local Jupyter notebook.
However, I don't see the point... the real advantage would be the other way around: using something like DataSpell or another local IDE on a remote notebook on Colab, with Colab's resources doing the computation, so you get:
IDE-level code suggestions (Colab is pretty slow compared to a local IDE)
the performance and other advantages of cloud computing
However, I don't see any blog posts talking about this... is there any way to do it?

Have you found a way of persisting the Odoo core modules in v14 other than with a volume? And if so, is it possible to deploy Odoo on Cloud Run?

I want to deploy Odoo as cheaply as possible. I tried Cloud SQL (15-30€/month) + Cloud Run, but after a few minutes the Odoo interface shows a white screen, with many logs in the console similar to this:
GET 404 1.04 KB 24 ms Chrome 91 https://bf-dev3-u7raxlu3nq-ew.a.run.app/web/content/290-f328144/1/website.assets_editor.css
My interpretation is that, since Cloud Run is stateless and the static web files seem to be stored in the core module, this information is lost once the container is killed. As I've spent a month looking for a solution, before trying any other way of deploying I ask the community: have you found a way of persisting the Odoo core modules in v14 other than with a volume? And if so, is it possible to deploy Odoo on Cloud Run?
Here is a list of all the ideas I tried:
First, I thought these CSS files were stored in the Werkzeug session, so I tried two addons that store the session somewhere other than the filestore: camptocamp odoo-cloud-platform-14.0/session-redis and misc-addons-13.0/base_session_store_psql. The problem persisted.
Then I read that the static CSS and JS files generated by the web editor are stored in Odoo as attachments, and that the addon misc-addons-13.0/ir_attachment_s3 can store these files in S3. Although I configured this addon, the problem persisted.
Next, I found a link describing the need to regenerate the assets so that they are stored in the database. Although I did that, the problem persisted.
Finally, I considered other ways of deploying Odoo. Deploying directly on a VM seems the most minimal and standard approach, and so has the best chance of working, although it will make GitOps harder to implement. Containers could be deployed on the VM through Docker Compose, which would help with rolling out updates. GKE Anthos seems to support GitOps too and seems to persist volumes, but its description says it is stateless. Finally, there is deploying on a K8s cluster, which brings containers and autoscaling compared with Docker Compose on a VM, but it does seem more expensive and harder to set up. On cost, the idea is to try small worker-node machines so the bill stays low during the night; on difficulty, implementing GitOps would probably mean adding Argo or something similar. I've also heard GKE Autopilot has a good free tier and is easier to deploy.
Thanks in advance :)
Cloud Run isn't a good solution for this. If the Werkzeug session is kept in memory, the same client isn't guaranteed to reach the same instance on each request, and can therefore lose the files even in the middle of a session.
The best solution is to use VMs with sticky sessions configured. You can use an old-school deployment on Compute Engine, or a cloud-native solution with GKE/K8s. The cost is more or less the same if you have only one cluster (the first one is free).
One correction about GKE Anthos: I think you mean Cloud Run on Anthos, and yes, that is like Cloud Run but uses Knative on GKE to manage the containers, and it's also serverless. Plain GKE, however, can handle a stateful deployment, which is what you need for Odoo.
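If you go the Compute Engine route, a minimal Docker Compose sketch along the lines the question already mentions could look like this (the odoo:14 and postgres:13 image tags, credentials, and volume names are illustrative assumptions, not a tested production setup):

version: "3"
services:
  db:
    image: postgres:13
    environment:
      - POSTGRES_DB=postgres        # the official odoo image expects this database
      - POSTGRES_USER=odoo
      - POSTGRES_PASSWORD=odoo
    volumes:
      - db-data:/var/lib/postgresql/data    # persists the database
  odoo:
    image: odoo:14
    depends_on:
      - db
    ports:
      - "8069:8069"
    environment:
      - HOST=db                     # tells Odoo where Postgres lives
      - USER=odoo
      - PASSWORD=odoo
    volumes:
      - odoo-data:/var/lib/odoo     # persists the filestore, incl. generated assets
volumes:
  db-data:
  odoo-data:

The named volume on /var/lib/odoo is the part Cloud Run cannot give you: the generated CSS/JS attachments live in that filestore, so with a persistent volume they survive container restarts.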

Does Google Colab use local resources too? How can I stop that?

I noticed that whenever I open a Google Colab notebook, my system fans spin up and all four of my cores show huge usage (on my Ubuntu laptop). Clearly a lot of JS is running on my system.
However, when I host a Jupyter notebook on another machine and use it from my laptop, resource usage is normal.
Q: Is there a way to make Google Colab use minimal resources on my PC?
While Google Colab is an awesome way to share my code (and ask questions), the fan noise annoys me a lot.
P.S. If this is not the right place to ask this, kindly let me know where I can ask it.
Check whether your Colab notebook is running on a local runtime. By default it runs on its own Compute Engine backend, but you have the option to change that.
P.S. It could also be Google Chrome simply using too many resources when running Colab. Try Edge or another less power-hungry browser.

Is there a way to connect Google Colab to my Google Drive for good?

I found this great question here:
https://stackoverflow.com/questions/48376580/google-colab-how-to-read-data-from-my-google-drive
which helped me to connect Colab to my Drive.
Here it is as well:
from google.colab import drive
drive.mount('/content/gdrive')
My question: Is there any way to do this Google authentication process only once? Colab disconnects from time to time when not in use, and then I need to restart the authentication process.
Thanks
Authentication is done per machine, exchanging keys to access Drive. Since you always get a new machine on reconnect, you need to re-authenticate.
However, another option is to use an API key for your Google Drive access. This can be done via the Google API Console for the Drive platform. Essentially you would have one API token you could use over and over again, which could tempt you to store it inside the notebook... and that is where the bad part starts.
If you opt in to using a token to "manually" mount the Drive folder, then as soon as someone gets hold of this token (e.g. through you sharing your notebook, a man in the middle, or forgetting to delete the key), your Drive folder is compromised. That is the reason my formal answer to this question is: no, you can't.
But since Colab provides a whole machine with a Unix environment where you can execute arbitrary bash commands, you are in control, which leaves you with additional resources for further investigation:
https://stackoverflow.com/a/50888878/2763239
https://medium.com/@uditsaini/access-google-drive-and-mount-google-drive-to-colab-notebook-google-ccbca1691d31
https://github.com/googlecolab/colabtools/issues/121#issuecomment-423326300
A recently released feature makes this much simpler. The details are described in this answer:
https://stackoverflow.com/a/60103029/8841057
The short version is that for notebooks in Drive that aren't shared, there's now a GUI option to mount Drive files automatically for a given notebook.
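If you do still mount from code (e.g. in a shared notebook, where the automatic option doesn't apply), drive.mount also accepts a force_remount flag, which is handy after a runtime reset; a small sketch:

from google.colab import drive
# re-runs the mount (and the auth flow, if the runtime is fresh)
# even when a previous mount is still registered
drive.mount('/content/gdrive', force_remount=True)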

How to create a VM using the vSphere Java API?

I want to write some Java code to create a VM, install an ISO (or copy an existing VM setup if installing an ISO is not possible), assign disk space, and create a login for the created VM.
I looked at the vSphere API examples at http://vijava.svn.sourceforge.net/viewvc/vijava/trunk/src/com/vmware/vim25/mo/samples/; they cover powering installed VMs on and off, but I could not figure out how to create one with the API. I have two questions:
What are the steps to create a VM using the API?
Which API calls or objects should be used to create a VM programmatically?
Appreciate your help.
I know I am about a year late, but when you download the SDK, you will find a sample showing how to create a VM disk. Understand the code and then you can do it your own way :)
The link to the SDK zip file:
http://communities.vmware.com/community/vmtn/developer/forums/java_toolkit
and inside the SDK, the VM disk sample:
\SDK\vsphere-ws\java\JAXWS\samples\com\vmware\vm
You'll want to keep the VMware Web Services SDK documentation handy - unfortunately they changed formats recently, so I'm not sure how good the deep links I can get for you will be. The specific method I've used is CreateVM_Task (you'll have to scroll down to find it on the Folder object). Alternatively, if you're using a resource pool, CreateChildVM_Task may be more applicable (again, scroll down to find it).
There is also a section of documentation on creating VMs that has some incomplete example code.
As far as where in the hierarchy to create the VM, that's up to you. Each host or cluster will have a vmfolder property that you can use to create VMs, or any other folder may work. Good luck!
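
To make the CreateVM_Task route concrete, here is a rough sketch using the open-source VI Java API from the samples link in the question. The host URL, credentials, datacenter, resource pool, and datastore names are placeholders, and error handling is omitted; treat it as a starting point, not a tested recipe:

import java.net.URL;
import com.vmware.vim25.*;
import com.vmware.vim25.mo.*;

public class CreateVmSketch {
    public static void main(String[] args) throws Exception {
        // connect to vCenter/ESX (placeholder URL and credentials)
        ServiceInstance si = new ServiceInstance(
                new URL("https://vcenter/sdk"), "user", "password", true);

        // find the datacenter and its VM folder (names are placeholders)
        Datacenter dc = (Datacenter) new InventoryNavigator(si.getRootFolder())
                .searchManagedEntity("Datacenter", "MyDatacenter");
        Folder vmFolder = dc.getVmFolder();
        ResourcePool rp = (ResourcePool) new InventoryNavigator(dc)
                .searchManagedEntity("ResourcePool", "Resources");

        // minimal hardware spec: name, CPU, memory, guest OS, datastore
        VirtualMachineConfigSpec spec = new VirtualMachineConfigSpec();
        spec.setName("myNewVm");
        spec.setNumCPUs(1);
        spec.setMemoryMB(1024L);
        spec.setGuestId("otherGuest");
        VirtualMachineFileInfo files = new VirtualMachineFileInfo();
        files.setVmPathName("[datastore1]");  // datastore name is a placeholder
        spec.setFiles(files);

        // the method the answer above points at: Folder.CreateVM_Task
        Task task = vmFolder.createVM_Task(spec, rp, null);
        System.out.println("createVM_Task: " + task.waitForTask());

        si.getServerConnection().logout();
    }
}

Attaching a virtual disk or an ISO works the same way, via VirtualDeviceConfigSpec entries on the config spec; the VM disk sample from the other answer shows that part.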