How to save an installed package in google colab for instant access - google-colaboratory

So basically, I am installing a program/package as the root user using this command:
wget -q -O ironfish.sh https://api.nodes.guru/ironfish.sh && chmod +x ironfish.sh && ./ironfish.sh && unalias ironfish 2>/dev/null
It asks for some details and then installs the package.
But in Colab you have to reinstall this program every time the runtime restarts. Is there any way to store the package files using Google Drive, so that I don't have to reinstall the package to get quick access? I would appreciate any help you can give me.
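One commonly used pattern for this kind of setup is to mount Google Drive and cache both the downloaded script and whatever the installer writes, then restore those files in later sessions instead of reinstalling. Below is a minimal sketch; the install location ~/.ironfish and the Drive folder ironfish_cache are assumptions for illustration only, so check where ironfish.sh actually puts its files:
import os
from google.colab import drive
drive.mount('/content/drive')
CACHE = '/content/drive/MyDrive/ironfish_cache'   # hypothetical Drive folder
if os.path.exists(CACHE):
    # Later sessions: restore the cached files instead of reinstalling.
    !cp -r {CACHE}/.ironfish ~/ && cp {CACHE}/ironfish.sh .
else:
    # First session: install once, then copy the results into Drive.
    !wget -q -O ironfish.sh https://api.nodes.guru/ironfish.sh && chmod +x ironfish.sh && ./ironfish.sh
    !mkdir -p {CACHE} && cp -r ~/.ironfish {CACHE} && cp ironfish.sh {CACHE}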

Related

Is it possible to copy a folder from Google Cloud Storage to Google Colab without zipping it?

Colab doesn't seem to like a simple copy like this:
!gsutil cp gs://Bucket/Folder_to_be_copied Destination_colab
Should I add -r?
Mounting the bucket in Colab makes this easier for you: you can use regular Linux commands in place of gsutil commands. These are the steps for mounting the bucket in Colab.
from google.colab import auth
auth.authenticate_user()
Once you run this, a link will be generated; click on it and complete the sign-in.
!echo "deb http://packages.cloud.google.com/apt gcsfuse-bionic main" > /etc/apt/sources.list.d/gcsfuse.list
!curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
!apt -qq update
!apt -qq install gcsfuse
Use this to install gcsfuse on Colab. Cloud Storage FUSE is an open-source FUSE adapter that allows you to mount Cloud Storage buckets as file systems on Colab, Linux, or macOS systems.
!mkdir folderOnColab
!gcsfuse folderOnBucket /content/folderOnColab
Use this to mount the bucket onto the directory you just created.
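Once the mount is in place, plain Linux commands work against the mount point, so something like the following should let you copy the folder from the question without gsutil (the destination path here is just an example):
# List the mounted bucket, then copy a folder out of it with plain cp.
!ls /content/folderOnColab
!cp -r /content/folderOnColab/Folder_to_be_copied /content/local_copy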
You can use these docs for further reading: https://cloud.google.com/storage/docs/gcs-fuse
To answer your question: gsutil can be used to copy entire directories without zipping their contents, and your command looks fine. I do wonder what the Destination_colab part of your command refers to.
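For what it's worth, copying a directory prefix (rather than a single object) does need the recursive flag, so a version of the command with an explicit Colab destination might look like this (the destination path is only an example):
# -m parallelizes the transfer, -r copies the folder recursively.
!gsutil -m cp -r gs://Bucket/Folder_to_be_copied /content/Destination_colab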

Unable to run npm commands with sudo

I've recently installed the balena-cli package via npm (which itself has been installed using nvm) which works fine when accessed from my default user.
However, whenever I try to access npm, nvm, or balena-cli using sudo, they all print the following error:
$ sudo npm
sudo: npm: command not found
$ sudo nvm
sudo: nvm: command not found
$ sudo balena
sudo: balena: command not found
I tried using sudo chown on all three, but to no avail.
Basically, none of the Node related functions can be accessed using root.
Any suggestions on how to resolve this, perhaps by the use of environment variables?
As a stopgap, I found that the same commands work fine after switching to root with the following command:
$ sudo -s
However, since it's a stopgap, it would be great to find a way to run the same commands without switching back and forth to root.
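One workaround often suggested for nvm-managed installs (a sketch, assuming a standard nvm setup under your user's home directory) is to either pass your own PATH through sudo or symlink the binaries into a directory root already searches:
# Option 1: reuse your user's PATH for a single root command.
sudo env "PATH=$PATH" npm --version
sudo env "PATH=$PATH" balena --version
# Option 2: symlink the nvm-managed binaries onto root's PATH.
sudo ln -s "$(which node)" /usr/local/bin/node
sudo ln -s "$(which npm)" /usr/local/bin/npm
sudo ln -s "$(which balena)" /usr/local/bin/balena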
Basically, first check that you have flashed the Balena image correctly, then check the network permissions, log in to Balena as root, and run the commands with su instead of sudo; that might clear your issue.

Install dependencies from requirements.txt in Google Colaboratory

How do I install Python dependencies from a requirements file in Google Colab?
Something like the equivalent of pip install -r requirements.txt.
With Daniel's hint above, I was able to solve it.
Using the "uploading files from local computer script" I uploaded my requirements.txt file onto Google Colab platform. Script is found here. This is the script,
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
  print('User uploaded file "{name}" with length {length} bytes'.format(
      name=fn, length=len(uploaded[fn])))
The output on executing clearly says that it is saving this file as 'requirements.txt'. However, I couldn't find this file in Google Drive, which is fine by me. Then,
!pip install -r requirements.txt
worked!
You can write the file using the IPython cell magic %%writefile and then just do a !pip install -r requirements.txt, i.e.:
Cell 1
%%writefile requirements.txt
<contents of your requiements.txt>
Cell 2
!pip install -r requirements.txt
You cannot find the uploaded file in Google Drive because it is uploaded to the Python 3 Google backend compute engine. When you terminate the session, all the files and data will be deleted.
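If re-uploading requirements.txt every session gets tedious, one option is to keep the file in your Drive, mount it, and install from there; the path below is just an example:
from google.colab import drive
drive.mount('/content/drive')
# Install from a requirements.txt kept in Drive (example path).
!pip install -r /content/drive/MyDrive/requirements.txt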
Try:
python -m pip install -r requirements.txt
You can use
pip install -r "path"

Download and install wget

I am new to using an API to download data. I have to install wget on my Windows 10 64-bit machine but am not sure how to proceed. Where do I download it, and how do I install it properly? Could you share the necessary steps with me?
Thanks
M
I installed wget on my command line for Windows 10 64-bit machine using the following steps:
Download the wget setup EXE here: https://sourceforge.net/projects/gnuwin32/files/wget/1.11.4-1/wget-1.11.4-1-setup.exe/download?use_mirror=excellmedia
Run the EXE, and accept all the defaults
Now you have the wget app, which you access via your command line, as there is no GUI. You can access wget in two ways: 1) cd to its directory, or 2) add it to your Path environment variable so you can access it from any directory in the command prompt. I would recommend the second approach if you plan on using wget frequently:
cd to wget: copy the following and paste it into your command line:
cd \Program Files (x86)\GnuWin32\bin
Once in this folder, you can now run wget and all of its functions.
Add an environment variable: instead of having to manually cd to the wget directory, you can add wget to your Path environment variable. First, open File Explorer and paste the following into the address bar:
Control Panel\System and Security\System
click Advanced System Settings -> click Environment Variables -> select ‘Path’ in the Environment Variables window -> click Edit -> click New -> click Browse -> then enter this location:
C:\Program Files (x86)\GnuWin32\bin
Click OK on each window to exit. Now close your terminal and open it again. You can now invoke wget from any directory in your command prompt, without having to manually cd into the directory that houses the wget app.
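To confirm the Path change took effect, open a new Command Prompt and run something like the following (the URL is just an example):
wget --version
wget -O index.html https://example.com/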
Alternatively, you can make use of a Bash console on your Windows 10 machine.
Take a look at these instructions:
how-to-install-and-use-the-linux-bash-shell-on-windows-10
After enabling this feature, you will be able to install wget from the Ubuntu terminal using:
sudo apt-get install wget
An alternative solution can be found on Super User:
how-to-download-files-from-command-line-in-windows-like-wget-is-doing

Has anyone installed Scrapy under Canopy distro

I am new to Canopy. I have some data-mining projects that I would like to do using Python. Has anyone been able to install Scrapy under Canopy? Is it easy to install packages outside of the main repository?
The short answer is to try pip install scrapy from the command line (use the Canopy Command Prompt (Windows) or Canopy Terminal (OS X/Linux) found in the Tools menu; this ensures Canopy's User Python is on the PATH).
See this article in the Enthought Knowledge Base about installing external packages into Canopy. It provides information on the steps required, including the use of pip, as well as other considerations you will want to be aware of when installing external packages.
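A quick sanity check after running pip from the Canopy Command Prompt/Terminal might look like this:
pip install scrapy
python -c "import scrapy; print(scrapy.__version__)"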
These three steps will install Scrapy from its Ubuntu package repository:
1) sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 627220E7
2) echo 'deb http://archive.scrapy.org/ubuntu scrapy main' | sudo tee /etc/apt/sources.list.d/scrapy.list
3) sudo apt-get update && sudo apt-get install scrapy-0.24
I had the same problem; the link below helped me with this:
http://doc.scrapy.org/en/latest/topics/ubuntu.html#topics-ubuntu