Only able to use one session (non-GPU) on Google Colab Pro, and it comes with 12 GB of RAM despite "High RAM"

I subscribed to Colab Pro and have always been able to get more than one high-RAM, non-GPU session. But today I found this has been severely restricted.
I got a "Too many sessions" popup when I tried to start a second instance.
For the sole instance, I get only 12 GB of RAM even when configuring for High RAM.
I am aware there have been pricing changes. Their provisioning policy is so vague that I am not sure whether this is a violation of the terms. I have sent my "feedback" to alert them of the memory issue.
Anyone out there with a similar experience? Colab has been one of the cheapest options despite its awkwardness and inconvenience, but if it is no longer cost-competitive, I probably won't live with all the cons that go with it. I would greatly appreciate it if someone could post other alternatives (besides Kaggle).

Related

Google App Engine Latency skyrockets out of nowhere

Our development team is scratching its head wondering why Google App Engine latency tends to go through the roof from time to time, with very little predictability or warning. When latency jumps like this, we start to see timeouts between our app and the database. CPU utilization is pretty flat across the instances at the same time, which also makes this hard to understand.
We are using the Flex environment to host a .NET Core API. We like App Engine for its PaaS feel and its always-on feature. We are thinking about looking at Cloud Run as an alternative to test with, since we can't figure this out.
Any suggestions on where to look or how we could troubleshoot this?
Here's the latest latency spike from last night. Plenty of Cloud SQL timeouts and other exceptions happened during this spike as well.
There are a couple of possible reasons for this. This page lists the possible causes and the debugging steps. Mostly, check your monitoring for memory usage; there might be a memory leak. Also check the autoscaler configuration.
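For the autoscaler check: on the Flex environment, scaling behaviour is defined in the service's app.yaml under automatic_scaling. A rough sketch of the fields worth reviewing (the values below are placeholders, not recommendations):

    # app.yaml (App Engine Flexible) -- illustrative values only
    runtime: aspnetcore
    env: flex

    automatic_scaling:
      min_num_instances: 2        # too low a floor means cold spin-ups during bursts
      max_num_instances: 10       # too low a ceiling means queuing and latency spikes
      cool_down_period_sec: 180   # how long to wait after starting an instance before using its signals
      cpu_utilization:
        target_utilization: 0.6   # scaling keys off CPU utilization

Note that Flex autoscaling keys off CPU utilization, so a memory leak can push instances into trouble without the CPU graphs ever moving, which would fit what you're seeing.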

Increasing Colab Pro runtime limit

I'm using Colab Pro and I have no issue with the RAM when I'm using either a GPU or a TPU. The only problem is that my runs usually take more than 12 hours, and it looks like Colab automatically stops (with no error) after 12 hours. I've reached out to their support and got no response (it is strange in itself that Google does not offer proper support for technical, not just billing, issues on the paid Colab Pro plan!). I wonder if there is any way the runtime limit can be increased to more than 12 hours? Or has anyone experienced a similar situation, or have a contact I can reach out to?
A similar situation is described here, but since there was no question asked, I'm creating this post.
Answering this question here: the runtime limit is simply part of the Colab Pro plan, as stated on their website. For now, I had to upgrade to Colab Pro+ to be able to 1) have a longer runtime and 2) have background execution when I close my browser.
Is this ideal? No! Do I hate spending $50 a month on Colab Pro+ as a student? Yes! But that is just how the Colab Pro plans work. As the [wise] man says: "it is what it is!"

Can I regain access to Google Colaboratory after hitting the usage limits?

According to Google's FAQ page, Google often restricts heavy users from using GPUs and TPUs on Google Colaboratory. I think I ran into trouble for this reason. However, the document does not state whether restricted users can use the service again. If you know about the issue, please share your experience. Thank you.
For reference, I'll post my experience here: though I could not use Google Colab at the time, the restriction was lifted after one day.

Services similar to S3/EC2

Does any other provider offer a cloud computing + storage layer like S3/EC2, with free data transfer between the two layers?
I have looked at:
SoftLayer CloudLayer Storage -- no free transfer between CloudLayer storage and CloudLayer computing instances.
Rackspace Cloud Files -- quite a bit of marketing mumbo-jumbo and something about Cloud Connect; I gave up on the site once the live-chat popup started following me around.
Does anyone know of any others?
I'm looking to store some large (non-random-access) files on a storage solution for constant re-processing, and process them nearby, without paying transfer costs daily (looking to store in the 500-2000 GB range, re-processing it all daily).
Re-processing requires a (Linux) server with a "decent" (weasel word alert) configuration.
Thanks!
'Cloud computing' is a bit of a myth.
They're all just, essentially, virtual private servers. 'Cloud' instances tend to have the flexibility to be billed by the hour rather than monthly, but they're still just a VPS.
Persistent storage is a useful feature offered by a very limited number of VPS providers, but one that can easily be emulated by having two or more VPSes in the same data centre (Linode is an excellent VPS provider with free local data transfer; sadly, they're rather limited in capacity). I don't know of any other VPS/cloud providers who offer their own persistent storage solution.
It is something you can easily achieve yourself. VPS servers tend to be a little restrictive on hard drive capacity if you're looking for 500-2000 GB, so perhaps you could consider a dedicated server and handle storage and processing on the same machine... you can't get data more local than the same machine!
First, the short version: stop looking for “free”.
Now, in more detail: you're looking to consume some somewhat-non-trivial computing, data storage and networking resources. Presumably you've got a good reason for doing this; if you truly have, you'll have the ability to also purchase the resources required for what you want to do. There are a few options on this front, none of which are free:
Buy and host your own hardware.
Buy the hardware and host it in a colocation facility.
Hire the hardware:
  Long-term hire
  Short-term hire
All Amazon are doing is short-term, easy-to-set-up hiring of resources. Their prices are quite keen (if some other option is cheaper, it's because it is missing something significant that Amazon do; maybe it's something you don't need, but that's up to you to figure out). You can host the core of the Amazon API quite easily on whatever resources you've hired (see Eucalyptus), but be aware that going from having the software and the API to having everything work smoothly is a really big step; the more I work with Eucalyptus installations, the more impressed with Amazon I become. And that's despite also being pretty impressed with Eucalyptus itself.
But none of this is free. It takes real resources to provide – e.g., electricity to power the machines and keep them cool, and a building to house them in – and ultimately that has to be paid for somewhere. To expect otherwise is to believe that others should have to pay for things for you; it's pretty rare that that happens, and the more you need to consume, the rarer it is (especially if the economy isn't doing too well). So stop thinking in terms of how you can get it for nothing ("freeload") and instead take a good look at what it really costs to provide through various routes, and seek to minimize your costs. If you can't afford even that, your #1 problem isn't hosting but funding; fix that first.
Rest assured you're not alone in this matter. This is what lots of other people worldwide have to do to make their projects into reality. Good luck!
GoGrid has external storage with free transfer and access over typical protocols like SMB, NFS, rsync, and FTP. The first two allow mounting it as a normal drive.
Note also that many providers will allow you to create cloud servers with 2 TB of instance storage. I certainly can't name all of them, but you can find some via cloudorado.com.

Permanent DOS Attacks - Anyone Knowledgeable?

So, I'm looking into Permanent DOS attacks for a class, and I'm having a hard time coming up with concrete examples. There's a lot of information about Phlashing (flashing firmware to either brick the device, or put malicious firmware in its place, for those of you who don't know the term) but I'd like to have a broader set of examples.
That being said, there has to be a way to write code that will do something like wear out disk arms, right? Something that will have the disk seek to the end of the disk, then back to the front, over and over. Does anyone have an example of how that would be accomplished? Is there some way to specify where to seek to on a disk in C (similar to seeking to a certain point in a file, but for the entire HDD)? If not, I guess there's always trying to force a file's location on the disk... which seems like less fun to accomplish. Again, can you do something like that programmatically?
If anyone has any insight into these types of attacks, or any good resources for me to check into, I'd appreciate it. Maybe you read a story about it on Slashdot a few years back? Let me know! The more info I can gather, the less likely I'll be forced to kill time during my talk by bricking my router in the class :) I'm not made of money OR routers!
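On the narrow C question (addressing the whole disk rather than a file): on Linux you can open the raw block device node and lseek() within it exactly as you would a file, which requires root. A minimal sketch, assuming a hypothetical /dev/sda; note that the kernel page cache plus the drive's own cache and command reordering mean the heads barely move after the first pass, which is a large part of why this kind of "wear out the mechanism" attack is not very effective in practice:

    /* Sketch: seeking within a raw block device on Linux (run as root).
     * Without O_DIRECT, repeated reads are served from cache, so this mostly
     * exercises the software stack rather than the drive's head assembly. */
    #define _FILE_OFFSET_BITS 64
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>   /* BLKGETSIZE64 */

    int main(void)
    {
        const char *dev = "/dev/sda";   /* hypothetical device node */
        int fd = open(dev, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        unsigned long long size = 0;
        if (ioctl(fd, BLKGETSIZE64, &size) != 0) { perror("ioctl"); return 1; }

        char buf[512];
        for (int i = 0; i < 1000; i++) {
            /* alternate between the first and last sector of the device */
            off_t target = (i % 2) ? (off_t)(size - sizeof buf) : 0;
            if (lseek(fd, target, SEEK_SET) < 0) { perror("lseek"); break; }
            if (read(fd, buf, sizeof buf) < 0)   { perror("read");  break; }
        }
        close(fd);
        return 0;
    }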
Seems like these would primarily be limited to physical attacks and social engineering ("To enable your computer's hidden turbo function, remove the cover and pry this part out."). But:
Adjust screen refresh rates to insane values to blow older CRTs
Monkey with ACPI fan, charge, or battery controls if possible to cause overheating or battery failure.
Overwrite every rewritable storage device of every kind attached to any bus. Discover and overwrite any IDE, USB, etc... device you know the flash updater details for.
Of course nothing is permanent. You can replace the hard drive, BIOS chips, CPU, motherboard, memory, etc...
Although it is mostly fictional, the Halt and Catch Fire (HCF) instruction would be a very convenient and permanent DOS attack.
Steve Gibson (google his name) has a paper he wrote a few years back about protocol-level vulnerabilities in TCP/IP. Some of it is still pertinent today.
Socially engineer the power company or ISP to turn off service at the location in question.
Many devices in a computer today have their own firmware, including but not limited to the CPU, DVD drive, HDD, video card, and motherboard (BIOS). Most of these devices also have a way of updating their firmware, which can be used to brick them pretty efficiently, although this requires an individual approach to every device, often using privileged instructions and undocumented interfaces.
It's possible for a virus to do this. I seem to recall an actual virus doing this back in the day, but can't find anything to back that up.
I was able to find an article in which the author has a conversation with a VP from Western Digital, wherein he states that a program could potentially access a hard drive's firmware, causing such a DOS attack:
There are back doors if you will that allow us to get into places that the operating system can't go through the IDE connector
There used to be a few viruses that could cause old CRT monitors to break. They could generate invalid sync signals out the VGA port that were too high in frequency for the video sweep. I also remember a few that would use bad-sector flagging to draw images in old versions of Scandisk (we are talking early '90s or older). I don't remember any of the names or have any references, but they used to be quite annoying.
Fortunately, better circuits, memory protection, and API abstraction have made such attacks very difficult, if not impossible.