What's the difference between a runtime and a session in Google Colab?
I was using Colab recently and got confused between the following terms:
Kernel
Runtime
Session
I have searched the internet but couldn't find much help, so I'm posting here.
Runtime is the virtual machine allocated to you by Colab on a temporary
basis, with a lifetime limited to 12 hours.
Session is the HTTP session through which your browser accesses your Colab
runtime. It disconnects after an inactivity timeout, but this does not
affect your runtime: you can reconnect to it multiple times within the
12-hour window. Temporary files are stored in the runtime's virtual
directory called "/content" and will persist from session to session
until your runtime is reset.
Kernel is the process that actually executes your commands; it runs inside
the runtime (in Colab's case, an IPython kernel).
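A quick way to see the session/runtime distinction from a notebook cell (a minimal sketch; the file name is just an illustration):

    # Files written under /content survive browser disconnects and
    # reconnects, but are lost when the runtime itself is reset.
    with open('/content/scratch.txt', 'w') as f:
        f.write('still here after a browser reconnect')

    # /content is also the default working directory:
    !ls /content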
Related
I'm using Google Colab to run a server with ngrok and it's amazing, but every time I leave, it disconnects and my server stops for good. It makes sense for that to happen, but is there a way or a loophole around it? Is there a device I can keep this running on? I've used the while True: pass method and it works, but it requires me to keep the tab open, and I leave my computer closed a lot. Is there a web hosting service that can keep a webpage running on a server forever?
I suggest looking at this topic.
Also notice that with the free version of Colab, your maximum connection time is 12 hours, no matter what. If you upgrade to the Pro version, that is extended to a maximum of 24 hours. Look here for more details.
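As an aside, if you do use the keep-the-tab-open trick from the question, a sleep-based loop is gentler than while True: pass, which busy-waits on a CPU core. A minimal sketch (it does not remove the disconnect behaviour or the 12-hour cap):

    import time

    # Keeps the cell "running" without pegging the CPU.
    # The tab still has to stay open, and the runtime limit still applies.
    while True:
        time.sleep(60)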
I just started using Google Colab for a project of mine. I see a "CONNECT" button on the web page that presents two options:
Connect to Hosted Runtime
Connect to Local Runtime
Can anyone explain what the two mean and how they may affect my project? I did not find any useful documentation about them.
Hosted Runtime runs on a new machine instance in Google Cloud. You don't need to set up any hardware, but you may need to install a few libraries every time you use it.
Local Runtime runs on your machine at home. You need to install Python and Jupyter, and set up some forwarding. It is useful if you have a lot of data to process locally, or if you have your own powerful GPU to use.
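For the local option, the setup roughly follows Google's local-runtimes guide; at the time of writing it looks something like this on your own machine (treat the exact flags as a sketch, as they may have changed):

    # Install and enable the websocket bridge that Colab connects through
    pip install jupyter_http_over_ws
    jupyter serverextension enable --py jupyter_http_over_ws

    # Start Jupyter so that colab.research.google.com is allowed to connect
    jupyter notebook \
      --NotebookApp.allow_origin='https://colab.research.google.com' \
      --port=8888 \
      --NotebookApp.port_retries=0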
In most cases, I use Hosted Runtime.
I am running Apache Guacamole on a Google Cloud Compute Engine f1-micro with CentOS 7 because it is free.
Guacamole runs fine for some time (an hour or so), then unexpectedly crashes. I get the ERR_CONNECTION_REFUSED error in Chrome, and when running htop I can see that all of the Tomcat processes have stopped. To get it running again, I just have to restart Tomcat.
I have a message in the Compute Engine console saying: Instance "guac" is overutilized. Consider switching to the machine type: g1-small (1 vCPU, 1.7 GB memory).
I have tried limiting the memory allocation to tomcat, but that didn't seem to work.
Any suggestions?
I think the reason for the ERR_CONNECTION_REFUSED is likely the VM instance falling short on resources: to keep the OS up, the process manager (most likely the kernel's out-of-memory killer) shuts down some processes. SSH is one of those processes, and once you reboot the VM, resources resume operation in full.
As for the over-utilization notification recommending g1-small (1 vCPU, 1.7 GB memory), please note that f1-micro is a shared-core micro machine type with 0.2 vCPU and 0.60 GB of memory, backed by a shared physical core, and is only ideal for running smaller, non-resource-intensive applications.
Depending on your Tomcat configuration, also note that:
Connecting to a database is an intensive process.
When you create a Tomcat deployment through Google Marketplace, the default VM setting is 1 vCPU + 3.75 GB memory (n1-standard-1), so the recommended upgrade to g1-small (1 vCPU, 1.7 GB memory) should, if anything, be a minimum in your case.
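If you do stay on a small machine, a common way to cap Tomcat's heap is a setenv.sh next to catalina.sh; a sketch with illustrative values, not tuned recommendations:

    # $CATALINA_BASE/bin/setenv.sh
    # Cap the JVM heap so Tomcat fits inside a small instance's memory;
    # 128m/256m are placeholders to adjust for your workload.
    export CATALINA_OPTS="$CATALINA_OPTS -Xms128m -Xmx256m"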
Why was the g1-small machine type recommended? Compute Engine uses the same CPU utilization numbers reported on the Compute Engine dashboard to determine what recommendations to make. These numbers are based on the average utilization of your instances over 60-second intervals, so they do not capture short CPU usage spikes.
So, applications with short usage spikes might need to run on a larger machine type than the one recommended by Google to accommodate these spikes.
In summary, my suggestion would be to upgrade as recommended. Also note that rightsizing gives warnings when a VM is underutilized or overutilized; in this case it is recommending that you increase your VM size due to overutilization. Keep in mind that this is only a recommendation based on the available data.
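If you decide to resize, it can be done from the console, or with gcloud along these lines (the instance must be stopped first; the zone below is a placeholder):

    # Stop, resize to g1-small, and restart the instance
    gcloud compute instances stop guac --zone=us-central1-a
    gcloud compute instances set-machine-type guac \
        --machine-type=g1-small --zone=us-central1-a
    gcloud compute instances start guac --zone=us-central1-a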
My Spark Java application is running on a remote machine in our internal lab. To analyse the memory consumption of the remote application, I attached JProfiler to the remote application's PID using 'attach mode' (with the help of jpenable) from my local machine.
After attaching to the remote application from JProfiler on my local machine, the 'Allocation tree' shows only non-array object allocations. I want to see array allocations as well from my local machine.
Please help me understand how to see array allocations with JProfiler.
Thanks,
Nagendra R
When you attach to a running JVM with the JProfiler GUI, the session startup dialog has an option "Record array allocations". It is not selected by default, because it requires a large amount of reinstrumentation, which can be very slow.
If array allocation spots are important for your analysis, it's better not to use attach mode but to pass the VM parameter for profiling as given by
Session->Integration wizards->New remote integration
Then the instrumentation is done while the classes are loaded.
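For reference, the wizard prints the exact parameter for your installation; it generally looks something like the line below, where the install path and port are placeholders. For a Spark application you would typically pass it via spark.driver.extraJavaOptions or spark.executor.extraJavaOptions:

    # Added to the Java command line of the profiled JVM
    -agentpath:/opt/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849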
On my Google Cloud VM, Google ran an "automatically migrate an instance" operation on my account. After the instance was migrated, it was shut down. This happened on two of my instances, and I can no longer access my site. Can someone help me troubleshoot this? I've included screenshots to help illustrate my concern.
[Screenshot: Compute Engine > Operations]
[Screenshot: after clicking the "automatically migrate an instance hero-new-production" line item]
Thank you for your help.
Have you set the Automatic restart option to On for your VM instance?
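If not, it can be turned on in the console, or with gcloud roughly like this (the zone is a placeholder; the instance name is taken from your screenshot):

    # Enable automatic restart after a crash or unexpected termination
    gcloud compute instances set-scheduling hero-new-production \
        --zone=us-central1-a --restart-on-failure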
Are you using a Local SSD or a persistent disk for your VM? Local SSD data does not persist through instance termination. When the instance restarts, it creates a new Local SSD that you must format and mount.
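If it is a Local SSD, re-creating the filesystem after a restart looks roughly like this (the device path varies by interface; check /dev/disk/by-id/ on your instance):

    # Format and mount the new Local SSD the instance was recreated with
    sudo mkfs.ext4 -F /dev/disk/by-id/google-local-ssd-0
    sudo mkdir -p /mnt/disks/ssd0
    sudo mount /dev/disk/by-id/google-local-ssd-0 /mnt/disks/ssd0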
This troubleshooting guide might be helpful.