Colab: How to disconnect from session without closing the tab?

Some background
My computer fan goes crazy when I am using Google Colab; it definitely uses local resources somehow. I am running very long processes (over 4 hours). Yesterday I noticed that I had been disconnected, and I thought my session had crashed, since I stopped receiving the status updates from my task's progress bar. But after clicking on Connect to a hosted runtime I was able to reconnect to that session and interact with it just fine. Given that Google Colab uses some of my local resources, I am looking for a way to put the client application on hold for a little while.
Question
How to manually disconnect from my remote session without crashing/terminating it? Is that even possible?
Note:
There is an answer to Does Google Colab stay connected when I close my browser? that says:
The current cell will continue executing once you close your browser, but the outputs will not end up in the notebook in Drive.
I would be fine with leaving the session running remotely while losing access to the outputs in the notebook, given that I save the result to Google Drive when the process is done. So, not being able to see the output in the notebook would not be an issue for me.
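For context, a minimal sketch of the kind of Drive persistence described above, using the documented google.colab.drive mount. The output path and the final_results variable are placeholders for your real data:

```python
# Sketch: persist the final result to Google Drive from inside Colab, so the
# browser tab can go away without losing the output.
from google.colab import drive

drive.mount('/content/drive')  # prompts for authorization once per session

with open('/content/drive/MyDrive/results.txt', 'w') as f:
    f.write(final_results)  # placeholder for whatever your long job produced
```

Once the file is written under /content/drive, it survives the runtime independently of the notebook's cell outputs.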

Related

How to prevent Disconnecting from a runtime (Google Colab)

I have been using Colab Free for a long time. My runtime gets disconnected every few minutes, so I decided to do some research on Stack Overflow. I found some Chrome DevConsole codes (How to prevent Google Colab from disconnecting?) and they were working until today, when the runtime started getting disconnected again.
[Q] How can I keep my runtime alive?
You should move to an EC2 instance on Amazon.

Unable to get a connection in Google Colab

I've been trying to connect to a hosted runtime on Google Colab as usual. Until now it worked perfectly, but for some reason, every time I press 'CONNECT' it shows 'No backends available' and doesn't connect to a runtime. I tried it in incognito mode as well, and also tried different browsers, but the problem persists.

SSH timeout when running importDump.php on a Bitnami Mediawiki instance on Google Cloud server

The import seems to start out OK, showing the contents of the MediaWiki in the terminal window. At some point (often around the same point in the content), the SSH terminal freezes up, and the Opera browser returns an 'out of memory' message.
Two questions:
1. Can I just start the import and ask the server to run it regardless of the status of the terminal window on my machine (or the internet connection)?
2. If no to #1, what can I modify to prevent the terminal from timing out?
It could be that the cause of the problem is not the Google Cloud server or the network, but the client browser being used.
A good test would be to perform the same operation in another browser, if possible, and see how it goes. If the operation succeeds, then the problem lies with the Opera browser itself.
Also check the memory configuration on the client machine to see if it can handle the request.
There have been reports of out-of-memory errors in Opera:
https://forums.opera.com/topic/17877/new-version-out-of-memory-issue
If you have tried other browsers and the issue is the same, then it is not caused by Opera's 'out of memory' error.
Have you provided your Bitnami Mediawiki deployment with the correct instance specs to handle every request?
In the Google Cloud Platform console, click on Products & Services, which is the icon with the four bars at the top left-hand corner.
In the menu, go to the Compute section, hover over 'Compute Engine', and then click on 'VM Instances' to view all your instances.
Click on your Bitnami instance to see more details.
Go to 'Machine type', where you can see the CPU and memory allocated for the instance.
Make sure the Bitnami Mediawiki instance has a machine profile adequate for the request.
You can also check the instance performance while you’re doing the import and see how it behaves.
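If shell access still works between freezes, one rough way to watch that performance is a small monitoring loop on the instance itself. This is only a sketch, and it assumes psutil is installed on the server (pip install psutil):

```python
# monitor.py - print memory/CPU usage on the instance while importDump.php runs.
import time
import psutil

while True:
    mem = psutil.virtual_memory()
    print(f"CPU {psutil.cpu_percent(interval=1):5.1f}%  "
          f"RAM {mem.percent:5.1f}% used ({mem.available / 2**20:.0f} MiB free)")
    time.sleep(9)  # roughly one sample every 10 seconds
```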
As per the documentation, running importDump.php can take quite a long time. For a large Wikipedia dump with millions of pages, it may take days, even on a fast server.
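On question 1: long-running maintenance scripts are usually started under nohup, screen, or tmux so that they survive a dropped SSH session. As a rough Python equivalent, a sketch like the following could work; every path is a placeholder, and the Bitnami directory layout is an assumption you should verify on your instance:

```python
# launch_import.py - start importDump.php in its own session so it keeps
# running even if the SSH connection (or the local browser) dies.
import subprocess

with open('/tmp/import.log', 'ab') as log:
    subprocess.Popen(
        ['php', 'maintenance/importDump.php', '/path/to/dump.xml'],
        cwd='/opt/bitnami/apps/mediawiki/htdocs',  # assumed Bitnami layout
        stdout=log, stderr=subprocess.STDOUT,
        stdin=subprocess.DEVNULL,
        start_new_session=True,  # detach from the SSH session (like setsid/nohup)
    )
print('Import started; follow progress with: tail -f /tmp/import.log')
```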

How to get data out of citrix

Here's what I want to be able to do:
Run a program on my local computer which logs in to a Citrix server (using Citrix Receiver, or in a similar way); on the server, in the Citrix session, open a web browser, load a website, and then bring the HTML of that site back out of the Citrix session and onto my local computer. Basically, I want to get data out of a Citrix remote session.
How can I do this programmatically?
I'm fine with whatever programming language/modality you are comfortable in answering the question using.
I've looked a little into the Citrix APIs, but while I found some things about logging in, and even about sending keystrokes and mouse clicks, I found nothing about obtaining data. I could just log in and then use a program like Wireshark to capture the information, but I'm guessing it's all encrypted (plus then I wouldn't be doing my task entirely programmatically). I know of at least one open-source program which seems to be able to replace a Citrix receiver/client (openthinclient.org), but before I go digging through all its source code to try to answer my question, I thought I'd ask here in case someone has an easier answer.
If all you want is to automate the task, is having the program act as a citrix client necessary?
I assume you don't have install privileges inside your citrix session, so are unable to install one of the many automation tools available (such as http://docs.seleniumhq.org/)?
Given the above...
If you have/allow java on your local machine, have a look at http://www.sikuli.org/
The main difference between this and other automation tools I've come across is that Sikuli uses the image on screen to navigate the GUI, rather than hooking calls to the widgets (which won't work in a Citrix session).
So, assuming you can take a screenshot of your citrix session, it could be useful to you.
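A hedged sketch of what that could look like as a Sikuli (Jython) script, pulling the page source out through the shared clipboard. All the .png names are placeholder screenshots you would capture yourself, and it assumes client/session clipboard sharing is enabled in Citrix:

```python
# Drive a browser inside the Citrix session by image matching, then copy the
# page source out via the clipboard.
from sikuli import *  # available when running under the SikuliX runtime

click("browser_icon.png")           # launch the browser in the remote session
wait("address_bar.png", 15)         # wait until the browser window is up
click("address_bar.png")
type("http://example.com/page" + Key.ENTER)
wait("page_loaded_marker.png", 30)  # some visual landmark on the target page

type("u", KeyModifier.CTRL)         # Ctrl+U: view page source
wait(2)
type("a", KeyModifier.CTRL)         # select all of the source
type("c", KeyModifier.CTRL)         # copy; Citrix mirrors it to the local clipboard

html = Env.getClipboard()           # read the clipboard on the local machine
print(html[:200])
```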

How to test a cocoa touch app for the case when the network fails while downloading a file?

My iOS application, among its other features, downloads files from a specific server. The downloading occurs entirely in the background while the user is working in the app. When a download is complete, the resource associated with the file appears on the app screen.
My users report some misbehavior involving missing resources that I could not reproduce. Some side information leads me to suspect that the problem is caused by the download of the resource's file being aborted midway, leaving the app with a partially downloaded file that never gets completed.
To confirm the hypothesis, to make sure any fix works, and to test for such random network vanishing under my feet, I would like to simulate the loss of the network on my test environment: the test server is web sharing on my development Mac, the test device is the iOS simulator running on the same Mac.
Is there a more convenient way to do that than manually turning web sharing off at a breakpoint?
Depending on how you're downloading your file, one possible option would be to set the callback delegate to nil halfway through the download. The data would still be downloaded, but your application would simply stop receiving callbacks. That said, I don't know whether that matches how the app would behave if the connection truly dropped.
Another option would be to temporarily point the download request at some random file on an external web server, then halfway through just disconnect your computer from the internet. I've done that to test network-connectivity issues, and it usually works. The interesting wrinkle in your case is that you're downloading from your own computer, so disconnecting won't help. This would just let you determine the order of callbacks within the application when this happens (does it make any callbacks at all? In what order?), so that you can simulate that behavior when actually pointed at your test server.
Combine both options together, I guess, to get the best solution.
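Since the test server is just web sharing on the same Mac, another hedged option is to stand in a tiny local server that deliberately truncates every response partway through, so the simulator sees a mid-download failure on demand. A minimal sketch, where the port and file name are placeholders:

```python
# flaky_server.py - serve a file but abort each transfer halfway, so the app
# under test sees the network "vanish" mid-download. Run it, then point the
# simulator at http://127.0.0.1:8000/ instead of web sharing.
import http.server
import os

FILE = "resource.bin"  # placeholder: any large-ish test file in this directory

class FlakyHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        size = os.path.getsize(FILE)
        self.send_response(200)
        self.send_header("Content-Length", str(size))  # promise the whole file
        self.end_headers()
        with open(FILE, "rb") as f:
            self.wfile.write(f.read(size // 2))  # ...but deliver only half
        self.wfile.flush()
        self.connection.close()  # abrupt close: the client sees a truncated body

if __name__ == "__main__":
    http.server.HTTPServer(("127.0.0.1", 8000), FlakyHandler).serve_forever()
```

Because the Content-Length header promises the full file but only half arrives before the socket closes, the client's download callbacks should fail mid-transfer in a repeatable way.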