How to create a Cloud9 SSH workspace with a DreamHost VPS - ssh

I have already installed Node.js (v0.10.30) and npm. I'm able to establish an SSH connection between my Mac and the DreamHost VPS via the terminal, but I can't do it in Cloud9. Can someone help me, please?

./server.js -p 8080 -l 0.0.0.0 -a :
--settings Settings file to use
--help Show command line options.
-t Start in test mode
-k Kill tmux server in test mode
-b Start the bridge server - to receive commands from the cli [default: false]
-w Workspace directory
--port Port
--debug Turn debugging on
--listen IP address of the server
--readonly Run in read only mode
--packed Whether to use the packed version.
--auth Basic Auth username:password
--collab Whether to enable collab.
--no-cache Don't use the cached version of CSS
So to use your own VPS, just change 0.0.0.0 to your server's IP address.
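For reference, a minimal sketch of getting the Cloud9 SDK running on a VPS. It assumes the standard open-source c9/core repository layout; the workspace path and credentials are placeholders to adjust:
# clone and install the open-source Cloud9 SDK on the VPS
git clone https://github.com/c9/core.git c9sdk
cd c9sdk
scripts/install-sdk.sh
# bind to the VPS's public IP and protect the IDE with basic auth
node server.js -p 8080 -l YOUR_SERVER_IP -a username:password -w ~/workspace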

Related

Configuring Container Registry in gitlab over http

I'm trying to configure the Container Registry in GitLab, installed on my Ubuntu machine.
I have Docker configured over HTTP and it works; I added it as an insecure registry.
GitLab is installed on the host http://5.121.32.5
external_url 'http://5.121.32.5'
In the gitlab.rb file, I have enabled the following settings:
registry_external_url 'http://5.121.32.5'
gitlab_rails['registry_enabled'] = true
gitlab_rails['registry_host'] = "5.121.32.5"
gitlab_rails['registry_port'] = "5005"
gitlab_rails['registry_path'] = "/var/opt/gitlab/gitlab-rails/shared/registry"
To make the Docker daemon listen on the port, I created a systemd drop-in directory:
sudo mkdir -p /etc/systemd/system/docker.service.d/
Here are the contents of the drop-in file:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
But when this line runs from the .gitlab-ci.yml file:
docker push ${MY_REGISTRY_PROJECT}:latest
I get this error:
Error response from daemon: Get "https://5.121.32.5:5005/v2/": dial tcp 5.121.32.5:5005: connect: connection refused
What is the problem? What did I miss?
And why does the error say https when I have http configured?
When you use docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}, the docker command defaults to HTTPS, which causes the problem.
You need to tell the Docker engine used by your GitLab Runner to treat the registry as insecure:
On the server where the GitLab Runner is running, add the following option to your Docker launch arguments (for me, that meant adding it to DOCKER_OPTS in /etc/default/docker and restarting the Docker engine): --insecure-registry 172.30.100.15:5050, replacing the IP and port with those of your own insecure registry.
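On newer Docker engines, an equivalent alternative to DOCKER_OPTS is the daemon configuration file. A sketch, assuming your registry listens on 5.121.32.5:5005 as configured above; put this in /etc/docker/daemon.json on the runner host:
{
  "insecure-registries": ["5.121.32.5:5005"]
}
Then restart the engine:
sudo systemctl restart docker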

How to connect to expo via private tunnel (not ngrok)

My problem is that at work I cannot connect to Expo over the network, so I need to use the tunnel, which is fine. However, the tunnel is sometimes really slow, which destroys the developer experience.
Since I can also host Expo locally on localhost, I had the idea of simply SSH-tunneling to a remote server that has an open port.
My remote host runs Ubuntu,
so I SSH there like so:
ssh -R 0.0.0.0:19000:0.0.0.0:19000 user@ip
In order for this to work, I also added
GatewayPorts clientspecified
to my /etc/ssh/sshd_config
...
sudo netstat -plutn
shows me
tcp 0 0 0.0.0.0:19000 0.0.0.0:* LISTEN 20183/2
so it is accepting requests (I also tried forwarding port 19001 so I could get something back when entering it in the browser, which worked fine).
However, when I enter exp://serverip:19000 into the Expo client on my Android phone, it can't connect.
Any ideas?
It looks like Expo uses multiple ports: 19000, 19001, and 19002. So you will need to forward all of them.
e.g.
$ ssh -f -N -R 19000:localhost:19000 user@ip
$ ssh -f -N -R 19001:localhost:19001 user@ip
$ ssh -f -N -R 19002:localhost:19002 user@ip
Also, you can set the REACT_NATIVE_PACKAGER_HOSTNAME environment variable to use the remote host.
$ export REACT_NATIVE_PACKAGER_HOSTNAME="ip"
$ expo start
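The three forwards can also be combined into a single ssh invocation; a sketch under the same assumptions (replace user@ip with your own):
$ ssh -f -N -R 19000:localhost:19000 -R 19001:localhost:19001 -R 19002:localhost:19002 user@ip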

docker: Says connection refused when attempting to connect to a published port

I'm a newbie at Docker. I'm creating a Hello, World example. All I'm trying to do is bring up Apache in a Docker container and then view the default website from the host machine.
Dockerfile
FROM centos:latest
RUN yum install epel-release -y
RUN yum install wget -y
RUN yum install httpd -y
EXPOSE 80
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]
And then I build it:
> docker build .
And then I tag it:
> docker tag 17283f566320 my:apache
And then I run it:
> docker run -p 80:9191 my:apache
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
It then runs....
In another terminal window, I attempt to issue the curl command to view the default web site.
> curl -XGET http://0.0.0.0:9191
curl: (7) Failed to connect to 0.0.0.0 port 9191: Connection refused
> curl -XGET http://localhost:9191
curl: (7) Failed to connect to localhost port 9191: Connection refused
> curl -XGET http://127.0.0.1:9191
curl: (7) Failed to connect to 127.0.0.1 port 9191: Connection refused
Just to make sure that I got the port correct, I run this:
> docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5aed4063b1f6 my:apache "/usr/sbin/httpd -D F" 43 seconds ago Up 42 seconds 80/tcp, 0.0.0.0:80->9191/tcp angry_hodgkin
Thanks to all. My ports were reversed: -p takes host_port:container_port, so it should be
> docker run -p 9191:80 my:apache
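As a quick sanity check with the corrected host:container order (the container name here is arbitrary):
> docker run -d -p 9191:80 --name hello-apache my:apache
> curl -I http://localhost:9191
curl should now return a response from the Apache default page.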
Although you created the containers on your local machine, they are actually running on a different machine (a virtual machine).
First, check the IP of your Docker machine (the virtual machine):
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default * virtualbox Running tcp://192.168.99.100
Then run curl command to view the default web site on your apache web server inside the container
curl http://192.168.99.100:9191
If you are running Docker natively on an Ubuntu machine, you should be able to reach your container via localhost.
If you are using Mac or Windows, your Docker container does not run on localhost but on its own IP. You can get the container's IP with docker inspect <container id> | grep IPAddress, or, if you are using docker-machine, with docker-machine ip <docker_machine_name>.
Related info:
http://networkstatic.net/10-examples-of-how-to-get-docker-container-ip-address/
https://docs.docker.com/machine/reference/ip/
How to get a Docker container's IP address from the host?
So your curl call should look something like this: curl <container_ip>:<container_exposed_port>
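For example, a sketch that pulls the container IP with docker inspect and curls the exposed port directly (angry_hodgkin is the container name from the docker ps output above):
# extract the container's IP via docker inspect's Go-template formatter
CONTAINER_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' angry_hodgkin)
curl "http://${CONTAINER_IP}:80"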
You can also tag your image at build time with the -t parameter, like this:
docker build -t my:image .
Another tip: you can optimize your Dockerfile by combining the yum install commands, like this:
RUN yum install -y \
epel-release \
wget \
httpd
http://blog.tutum.co/2014/10/22/how-to-optimize-your-dockerfile/

Vagrant startup with sshpass and port forward not working

I am trying to set up Vagrant for my development environment, but I am having problems getting Vagrant to automatically connect to a remote server on startup.
In my Vagrantfile I have the following line:
config.vm.provision "shell", path: "vagrant/startup.sh", run: "always"
In my startup.sh I have the following:
#!/usr/bin/env bash
sshpass -p '*******' ssh -fN -L 389:XXX.XXX.XXX.XXX:389 ******@********.*******.**.**
The provisioner runs on startup, but no ports get forwarded.
If I SSH into the box and run the command manually, it returns without any errors but doesn't work. I can only get it to work if I don't use sshpass.
P.S. Please don't tell me about the insecurity of sshpass, this is only used for LAN connections
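One common cause of exactly this symptom (an assumption, since it isn't confirmed here): when ssh runs non-interactively under sshpass, it cannot answer the first-connection host-key prompt and gives up silently. A sketch of startup.sh that sidesteps that and fails loudly if the forward cannot bind (host and password kept as the elided placeholders above):
#!/usr/bin/env bash
# accept the host key non-interactively; abort if the tunnel cannot be established
sshpass -p '*******' ssh -fN \
    -o StrictHostKeyChecking=no \
    -o ExitOnForwardFailure=yes \
    -L 389:XXX.XXX.XXX.XXX:389 ******@********.*******.**.**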

How can I connect to a Google Compute Engine virtual server with a GUI?

I am testing Google Compute Engine, and I created a VM with Ubuntu. When I connect to it by clicking the SSH connect button, it opens a console window.
Is that the connection you get?
How do I open a real screen with a GUI on it? I don't want the console.
A much better solution, from Google themselves:
https://medium.com/google-cloud/linux-gui-on-the-google-cloud-platform-800719ab27c5
You need to forward the X11 session from the VM to your local machine. This has been covered in the Unix and Linux stack site before:
https://unix.stackexchange.com/questions/12755/how-to-forward-x-over-ssh-from-ubuntu-machine
Since you are connecting to a server that is expected to run compute tasks, there may well be no X11 server installed on it. You may need to install X11 and related packages. You can do that by following the instructions here:
https://help.ubuntu.com/community/ServerGUI
Since I have needed to do this recently, I am going to briefly write up the required changes here:
Configure the Server
$ sudo vim /etc/ssh/sshd_config
Ensure that X11Forwarding yes is present. Restart the ssh daemon if you change the settings:
$ sudo /etc/init.d/sshd restart
Configure the Client
$ vim ~/.ssh/config
Ensure that ForwardX11 yes is present for the host. For example:
Host example.com
    ForwardX11 yes
Forwarding X11
$ ssh -X -C example.com
...
$ gedit example.txt
Trusted X11 Forwarding
http://dailypackage.fedorabook.com/index.php?/archives/48-Wednesday-Why-Trusted-and-Untrusted-X11-Forwarding-with-SSH.html
You may wish to enable trusted forwarding if applications have trouble with untrusted forwarding.
You can enable this permanently by using ForwardX11Trusted yes in the ~/.ssh/config file.
You can enable this for a single connection by using the -Y argument in place of the -X argument.
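Putting the client-side options together, a sketch of a ~/.ssh/config entry (example.com stands in for your VM's address; Compression mirrors the -C flag shown above):
Host example.com
    ForwardX11 yes
    ForwardX11Trusted yes
    Compression yes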
These instructions are for setting up Ubuntu 16.04 LTS with LXDE (I use SSH port forwarding instead of opening port 5901 in the VM instance firewall)
1. Build a new Ubuntu VM instance using the GCP Console
2. connect to your instance using google cloud shell
gcloud compute --project "project_name" ssh --zone "project_zone" "instance_name"
3. install the necessary packages
sudo apt update && sudo apt upgrade
sudo apt-get install xorg lxde vnc4server
4. Set up vncserver (you will be asked to provide a password for the VNC server)
vncserver
echo "lxpanel & /usr/bin/lxsession -s LXDE &" >> ~/.vnc/xstartup
5. Reboot your instance (this returns you to the Google Cloud Shell prompt)
sudo reboot
6. Use the Google Cloud Shell download-files facility to download the auto-generated private key stored at $HOME/.ssh/google_compute_engine and save it on your local machine*****
cloudshell download-files $HOME/.ssh/google_compute_engine
7. From your local machine, SSH to your VM instance (forwarding port 5901) using the private key downloaded in step 6
ssh -L 5901:localhost:5901 -i "google_compute_engine" username@instance_external_ip -v -4
8. Run the vncserver in your VM instance
vncserver -geometry 1280x800
9. In your local machine's Remote Desktop Client (e.g. Remmina), set Server to localhost:5901 and Protocol to VNC
Note 1: to check if the vncserver is working ok use:
netstat -na | grep '[:.]5901'
tail -f /home/user_id/.vnc/instance-1:1.log
Note 2: to restart the vncserver use:
sudo vncserver -kill :1 && vncserver
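If you prefer testing from the command line instead of Remmina, a sketch using TigerVNC's vncviewer through the same tunnel (assumes the port forward from step 7 is active):
# connect through the forwarded local port; any VNC client would work here
vncviewer localhost:5901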
***** When you first connect via the Google Cloud Shell, the public and private keys are auto-generated and stored in the Cloud Shell instance at $HOME/.ssh/
ls $HOME/.ssh/
google_compute_engine google_compute_engine.pub google_compute_known_hosts
The public key should be added to /home/*user_id*/.ssh/authorized_keys
in the VM instance (this is done automatically when you first SSH to the VM instance from the Google Cloud Shell, i.e. in step 2).
You can confirm this in the instance metadata.
Chrome Remote Desktop allows you to remotely access applications with a graphical user interface from a local computer or mobile device. For this approach, you don't need to open firewall ports, and you use your Google Account for authentication and authorization.
Check out this Google tutorial on using it with Compute Engine: https://cloud.google.com/solutions/chrome-desktop-remote-on-compute-engine