PuTTY - Python 3 - Jupyter notebook - virtual machine

I need to run a Jupyter notebook 24/7 on a virtual environment. I've never done this before, and as a finance student I've barely heard of any of this.
I have the PuTTY connection, then the "something like cmd" window opens, and HERE I am... (I really don't know where I am). There is also a virtual environment that was created (not by me), which has some capacity (RAM, storage, CPU, and so on).
How do I open the Jupyter notebook on the virtual environment?
Is there a way to do this, or should I just delete all of this and use Task Scheduler?

PuTTY allows you to SSH to a server.
From this server's terminal session, you would need to run jupyter server (or jupyter lab). This would start the Jupyter Server/Lab web server on its default port. Read the documentation on running (and securing) a public server.
Using the same hostname/IP that you used in PuTTY, open http://<that address>:<jupyter port> in a browser window, and you should be presented with a Jupyter login page where you'd need to paste the token that is generated when Jupyter starts (assuming you've not configured any other authentication method).
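As a rough sketch (the port and listen address here are just examples; check your own configuration and the security docs before exposing anything publicly), that could look like this on the server:
jupyter lab --no-browser --ip=0.0.0.0 --port=8888
and then, in your local browser, open http://<that address>:8888 and paste the token that the command prints in the terminal.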
To get Python code to run constantly, I'd recommend just using cron with a regular Python script rather than a notebook file. Otherwise, you'd probably want to look at another CLI tool called papermill, along with cron.
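For example (the paths, schedule and notebook names are placeholders, and this assumes papermill is installed in the environment that cron uses), a crontab entry could look like:
# run the notebook every day at 02:00 and save the executed copy
0 2 * * * /home/user/venv/bin/papermill /home/user/analysis.ipynb /home/user/analysis-out.ipynb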

Related

Way to pass parameters or share a directory/file to a qemu-kvm launched VM on CentOS 7.0

I need to be able to pass some parameters to my virtual machine during its boot so it can set itself up properly. To do that I either have to bake the info into the image or somehow pass it as parameters to my qemu-kvm command. There are only a few of these parameters, and if this were VMware, we would just pass them as OVA params and, when the VM launches, call the OVA environment to get them. But launching from qemu-kvm I have no such option. I did some homework and found that I could use the virtio-9p driver for sharing files between host and guest. Unfortunately, RHEL/CentOS has decided not to support 9p.
With no option of rebuilding my RHEL kernel with the 9p options enabled, how do I solve the above problem? Either solution would work: pass/share some kind of JSON file to the VM (pre-populated on the host), which it will read and use to do its setup, OR set some kind of "environment variables" which I can query from within the VM to get these params and continue with setup. Any pointers would help.
If your version of QEMU supports it, you could use its -fw_cfg option to pass information to the guest. If that guest is running a Linux kernel with CONFIG_FW_CFG_SYSFS enabled, you will be able to read out the information from sysfs. An example:
If you launch your VM like so:
qemu-system-x86_64 <OPTIONS> -fw_cfg name=opt/com.example.test,string=qwerty
From inside the guest, you can then get the value back from sysfs:
cat /sys/firmware/qemu_fw_cfg/by_name/opt/com.example.test/raw
There appears to be some driver for Windows as well, but I've never used it.
When you boot your guest with -kernel and -initrd you should be able to pass environment variables with -append.
The downside is that you have to keep track of your current kernel and initrd outside of your disk image.
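A minimal sketch of that approach (the file names and the variable are made up; the actual kernel/initrd paths depend on your image):
qemu-system-x86_64 <OPTIONS> -kernel vmlinuz -initrd initrd.img -append "console=ttyS0 MYSETTING=qwerty"
Inside the guest, the value then shows up on the kernel command line, which you can read with:
cat /proc/cmdline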
Other possibilities could be a small prepared disk image (as you said), network/DHCP, a serial link into your guest, etc.; this really depends on your environment.
I was just searching to see if this situation had improved and came across this question. Apparently it has not improved.
What I do is output my variable data to a temp file (e.g. /tmp/xxFoo). Usually I write text or a tar archive straight to that file, then truncate it to a minimum size that is a 512-byte multiple (like 64K), otherwise the disk controller won't configure it. Then the VM is started with that file attached as a raw drive. After the VM has started, the temp file is deleted. From within the guest you can read/cat the raw block device and get the variable data (on BSD, use the c partition as the raw drive).
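A rough sketch of that flow (the file names, size and guest device name are examples; the device inside the guest depends on your bus/driver):
tar cf /tmp/xxFoo myconfig/          # on the host: pack the variable data
truncate -s 64K /tmp/xxFoo           # pad to a 512-byte multiple so the controller accepts it
qemu-system-x86_64 <OPTIONS> -drive file=/tmp/xxFoo,format=raw,if=virtio
# inside the guest (device name may differ, e.g. /dev/vdb for the second virtio disk):
tar xf /dev/vdb -C /etc/myapp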
In Windows guests it's tricky to get at the data. In theory you can read \\.\PhysicalDriveN, but I have never been able to get that to work. Cygwin can do it, and it works like Linux. The other option is to make your temp file a partitioned and formatted image, but that's a pain to create and update.
As far as sharing a folder goes, I use Samba, which works with just about anything. I usually run several instances of smbd with different configurations.
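For the Samba route, a minimal share definition in smb.conf could look something like the following (the share name, path and guest access are only an example; tighten it for anything beyond a local test setup):
[vmshare]
    path = /srv/vmshare
    read only = no
    guest ok = yes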
One option is to create an ISO file and pass it as a parameter. This works with both Windows and Ubuntu as host, and Windows and Ubuntu as guest. You can then read the mounted CD-ROM inside the guest OS:
qemu-system-x86_64 -drive file=c:/qemuiso/winlive1.qcow2,format=qcow2 -m 8G -drive file=c:\qemuiso\sample.iso,index=1,media=cdrom
On a Linux guest, mount the CD-ROM (Ubuntu):
blkid                             # check that the media is there
sudo mkdir /mnt/cdrom
sudo mount /dev/sr0 /mnt/cdrom    # this step can also be put in crontab
cd /mnt/cdrom
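To create the ISO in the first place, something along these lines works on most Linux hosts (genisoimage is one option; mkisofs or oscdimg on Windows are alternatives; paths and the volume label are placeholders):
genisoimage -o sample.iso -V CONFIG -r -J /path/to/config-dir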

jupyter server dfdata.to_clipboard from remote to local machine. how?

I have a dataframe, say dfdata, in a Jupyter server notebook running on a remote machine.
I want to get the dataframe from the remote machine's memory onto my local machine, say to paste dfdata into Excel.
Normally (when the notebook server is running locally), I do dfdata.to_clipboard() to copy the dataframe to the clipboard and am then able to paste it into Excel.
However, since dfdata now lives on the remote machine, dfdata.to_clipboard() does not put a copy of the dataframe on my local clipboard.
How can I make this work, i.e. copy and paste a dataframe from the remote machine into locally running Excel, a text file, etc.? Are there alternative methods, if to_clipboard() by design will not work across a remote server due to security restrictions/limitations?
As an alternative you can use copydf (available on GitHub and PyPI).
This should be as simple as pip install copydf.
Then in your jupyter session:
from copydf import copyDF
copyDF(df)
I wrote a Jupyter nbextension to do this (installation instructions in the README).
It overrides pandas' pandas.io.clipboard.copy and pandas.io.clipboard.clipboard_set (pandas.io.clipboard is an embedded copy of pyperclip) to send messages to the Jupyter frontend via the Comm mechanism. I had to add a simple Bootstrap-based UI that pops up when the client-side JS receives a message, as browsers won't let you push data onto the system clipboard without explicit user interaction (security!).
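The general idea, as a stripped-down sketch (this is not the extension's actual code; the target name and the frontend handler it would talk to are assumptions), is to replace the clipboard writer with something that ships the text over a Comm to the browser:
from ipykernel.comm import Comm
import pandas.io.clipboard as pd_clip

def send_to_frontend(text, **kwargs):
    # open a Comm to a JS handler registered in the notebook frontend
    # (the target name "clipboard_bridge" is made up for this sketch);
    # the frontend can then ask the user to confirm before writing the clipboard
    Comm(target_name="clipboard_bridge", data={"text": text})

# df.to_clipboard() looks up clipboard_set when it runs, so the override takes effect immediately
pd_clip.clipboard_set = send_to_frontend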
If you have another terminal session connected to the same remote server, you can try getting the clipboard system to output to the terminal.
I tried xclip -o in another ssh session (after calling df.to_clipboard() in the notebook) and it printed the contents of my DataFrame to the terminal. I was able to copy that text and paste it to Google Sheets successfully, split correctly into columns.
It may depend on what's installed on the server. There is apparently another clipboard system called xsel, but xclip worked for me on Ubuntu Server 16.04.
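So the flow is roughly as follows (the exact selection may differ depending on how pandas/pyperclip is set up on the server):
# first, in the notebook on the remote kernel: df.to_clipboard()
# then, in a second ssh session to the same server:
xclip -o
# if that prints nothing, try the clipboard selection explicitly:
xclip -o -selection clipboard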

gcloud compute ssh connects but shows the wrong instance name

I'm pretty new to the Gcloud environment, but getting the hang of it.
With our first project live on an instance, I've been shuffling some static IPs, instances and snapshots around for an optimal deployment workflow. But I can't understand what's going on now:
I have two instances, live-1 and dev-2.
Now I can connect to live-1 using gcloud compute ssh live-1 and it's okay.
When I try to connect to dev-2 using gcloud compute ssh dev-2, it logs me in to live-1.
The first time I tried to ssh to dev-2 it took longer than usual. After that it just connects me to the wrong instance immediately.
The goal was (as you might've guessed) to copy the live environment to a testing one. I did create an image of live-1 and cloned it to set up dev-2. In my earlier experience trying this, it was possible and worked as expected.
Whenever I use the Compute Console in the browser and use the online SSH tool from the instance list, it connects to dev-2 properly. But on my local machine, the aforementioned command connects me to live-1.
I already removed the IP for dev-2 from my known hosts, figuring it's cached somewhere, but no luck. What am I missing here?
Edit: I just found out that the instances are indeed separate, though 'named' the same; if I log in to dev-2, I do see myuser@live-1: in the shell, but it appears to be a separate instance. I created a dummy file on the supposed dev-2, and it doesn't show up on the actual live-1 machine.
So this is very confusing; I rely on the user@host prefix in front of every shell line to know what I'm actually working on, and having two instances with the same name but different environments is confusing.
OK, it was dead simple. Just run sudo hostname [desiredhostname] in the terminal and restart your terminal session.
So in my case I logged in to dev-2 and ran sudo hostname dev-2.
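Note that plain hostname only changes the name for the running system; on systemd-based images, a persistent variant (assuming hostnamectl is available on the instance) would be:
sudo hostnamectl set-hostname dev-2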

Cloud9IDE ssh to own workspace - WITHOUT nodejs being installed

Quick summary:
Can I use Cloud9 as an online shell terminal to connect to my own workspace (EC2 instance) WITHOUT having Node.js installed on that instance?
More details
I love the Cloud9 online IDE and am keen to use it for everything, as I just have a Chromebook. I just read about the new Snappy version of Ubuntu and wanted to launch an instance of it on Amazon's EC2, SSH in, and play with it.
I can SSH in from my Chromebook OK, but I'd like to know if there's a way to do this from Cloud9, i.e. to use it as an online shell terminal, without first installing Node.js on the EC2 instance (which Cloud9, as I understand it, needs for the fancier IDE features that I could do without for this use case).
Thanks for the help in advance - first post on Stack Overflow :)
Note: I'm a newish Linux user. I've successfully got Cloud9's IDE to work with a fresh regular Ubuntu EC2 instance by connecting via SSH from my Chromebook's crosh terminal and installing Node.js first, then switching to connect from Cloud9 using the 'own SSH workspace' option. However, I'm keen to see if I could have done this entirely from Cloud9, i.e. used Cloud9 like an online terminal to connect to the fresh EC2 instance, then installed Node.js to turn on Cloud9's fancy IDE features (or perhaps not install Node.js at all, and just use it as an online terminal, e.g. to play with an Ubuntu Snappy image quickly).
Unfortunately, Cloud9 needs Node.js on your server to work correctly. When you connect it to your workspace it should pop up with a prompt which, after you click Next, will automatically install all the dependencies Cloud9 needs to work.
(this is in response to your comment of 12/24)
You don't need Node installed on your Amazon server to make an ordinary SSH connection. Perhaps you're copying the wrong key over: it's the one ending in .pub in ~/.ssh (e.g. id_rsa.pub).
Amazon has a help page for this process - basically you're adding the content of the public key on C9 to the file ~/.ssh/authorized_keys on your server:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/managing-users.html
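In practice that just means appending one line to that file on the server (the key text and paths are placeholders here), e.g. from the session you already have open from your Chromebook:
echo "ssh-rsa AAAA...rest-of-the-C9-public-key..." >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys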
Then, you'd SSH from C9 to your server like so (note that -i takes your private key, not the .pub file):
ssh -i ~/.ssh/<my private key> <my username on amazon>@<amazon server IP>

Windows / Linux automatic key exchange

I have a build box, which I use to make continuous builds as well as run nightly unit tests. I'm using Jenkins to run my build/unit-test scripts; it is running on a Windows box because our compiler is Windows-based.
One of our enterprise solutions uses Python code with RabbitMQ for exchanging messages to sync specific database tables over a faulty network. I have unit tests to help verify that updates are happening correctly.
In order to unit test the Python updates, I need to be able to stop some services running on my Linux box, then restart them after I update the Python code. I set up a key exchange between my Windows box and Linux box, so that I don't have to put a password in the batch script.
When I'm remoted into the Windows box, I can successfully run the batch file, which uses plink commands that rely on the key exchange and PuTTY's Pageant (which is running in the background); e.g. I use plink to execute commands on the Linux box from the command line in my batch file. However, when I try to run the batch file from Jenkins, it doesn't work properly because it gets prompted for the SSH password when trying to run the plink commands.
I believe my current problem can be summarized by two issues, which I'm hoping can be verified and rectified:
I think Jenkins may be running as a different user or with different system credentials, so it's not able to connect like the logged-in user can. If this is the case, what would I need to do to get Jenkins to run the plink commands properly without being prompted for the password?
Pageant looks like it needs a password typed in every time the computer restarts. My research unearthed ways to put Pageant in startup, so you get prompted when you first log in, but I need this to be automatic, like I can do on Linux boxes. If Windows reboots because of a Windows update, then the unit tests will fail because they won't be able to connect to the Linux server. Sure, this only happens once a week, but over the course of a year it will be very annoying.
What can I do to solve the above two issues? If there is a good alternative to PuTTY for automatic key-based authentication between Windows and Linux, I'd be interested in hearing about it (I would prefer to stay away from Cygwin with OpenSSH, but might go down that route if the above can't be resolved).
I use plink on my Windows Jenkins box to communicate with Linux on a daily basis; there is no problem with it.
Like you theorized, Jenkins runs under its own user (the Windows default, I think, is the SYSTEM user), which is different from your logged-in session, even if you log in as Administrator. Your authentication key is stored in your (Administrator or otherwise) profile directory.
What you need to do is use PuTTYgen to export/save your key as a .ppk file, then supply the path to that .ppk file to plink:
plink -i "C:\path\to\id.ppk" <user>@<linux-host> "<command>"
Looks like there is a simpler way to do what I'm trying to do: Jenkins' Publish Over SSH plugin, https://wiki.jenkins-ci.org/display/JENKINS/Publish+Over+SSH+Plugin