Connecting through VNC to a Shared Image from another Google Cloud Project - ssh

I am currently working on a project in Google Cloud Compute Engine, and I have created an image of my current work. On this instance I can get a GUI view of the machine through VNC Viewer, using the public IP and port :5901.
I gave another person permission to use this image in his own project, but although he can start an instance created from my image, he can't connect over VNC using the public IP (the connection times out).
I guess it has something to do with the SSH protocol, but I don't know exactly how.
Does anyone have an idea of how to solve this issue?

I found the answer in a YouTube video: Here (10:52). I just had to enable the TCP ports in my project. Thanks!
I had to go to the firewall configuration and add a rule that allows all TCP ports from 0.0.0.0/0 for incoming connections on all instances, since I'll be using more than one instance in my project.
Honestly I don't know if I "overdid" anything, but it worked for me.
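If you'd rather not overdo it: opening every TCP port to the world is more than VNC needs, and a rule that only allows the VNC port works just as well. A possible gcloud equivalent (the rule name and network are illustrative):

    # Allow only the VNC port instead of all TCP; adjust the rule name
    # and network to your project.
    gcloud compute firewall-rules create allow-vnc \
        --network=default \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:5901 \
        --source-ranges=0.0.0.0/0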


Cross module direct communication (via IP / sockets)

If we have two custom modules that need to communicate directly via sockets, is there a way to know what IP address is assigned to each module?
After reading this article I was under the impression the azure-iot-edge network bridge might support referencing a running module by its module name as the hostname, but this doesn't seem to work.
We are trying to avoid scanning the network or relying on some local storage option, and we don't want to join the host network. So, any ideas how one running module can find the IP of another module that is expected to be running?
Here is a picture showing the two containers I am testing with. One container is just an alpine instance that I can attach a console to and use to try to ping / access the other containers. I can ping by IP address but want to ping by container name instead.
After further study, it turns out the arm32v7 image I was using had some issues when deployed. One oddity was that the date on the container was "Sun Jan 0 00:100:4174038 1900", and some other commands that should have worked were failing.
I ended up switching over to an Ubuntu image with iputils-ping installed and confirmed that the azure-iot-edge bridge does allow accessing other containers by their module name, which serves as the hostname. So all good here, works as expected; user error!
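For anyone verifying the same thing, the test from inside one module's container is just the following (the module name "SampleModule" is a placeholder for whatever your deployment manifest declares):

    # From a shell inside one module's container (slim images may need
    # iputils-ping installed first):
    ping -c 3 SampleModule

    # Or resolve the module name without pinging:
    getent hosts SampleModule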

Automate shadowsocks proxy server setup

I am going to a country with heavy Internet censorship, where YouTube, Gmail, and Twitter are blocked. So I decided to set up a shadowsocks proxy server on a Raspberry Pi and give it to my friend, who lives in a low-censorship area; I'd use her internet connection to reach my Gmail. Unfortunately my friend is totally computer illiterate, and she often moves house. That means I need to automate every piece of network configuration on the Pi.
In other words, the Pi should automatically recognize a new network and initialize the server. So here is my plan:
Step 1: Every time the Pi powers up, automatically detect its IP and send it to my safe email address.
Step 2: The Pi will probably sit behind a local area network, so NAT (Network Address Translation) or frp (fast reverse proxy) should expose it to the public internet. Then I can find my Pi.
Step 3: Set up a shadowsocks server on the Pi that can update its server configuration automatically.
Then the Pi would deploy itself automatically on the new network; I'd only have to check my safe email and update my shadowsocks client config.
1. Is this a feasible plan?
2. I finished step 1 (sketched below), but I am blocked at step 2. I need help with steps 2 and 3; please point me to some course or plan.
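For reference, step 1 on my Pi boils down to a small boot script along these lines (the mail address and lookup URL are placeholders, and it assumes a working mail setup such as msmtp):

    # Run from /etc/rc.local (or a systemd unit) at boot:
    LAN_IP=$(hostname -I | awk '{print $1}')
    PUB_IP=$(curl -s https://api.ipify.org)
    echo "Raspi up. LAN: $LAN_IP  public: $PUB_IP" \
        | mail -s "Raspi online" me@example.com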
Thank you for your time; any comment is welcome.
A problem I see in your plan is in step 2:
Normally it requires setting up NAT (port forwarding) on the wifi router to open the shadowsocks port toward the WAN side, and that is hard to automate from the Raspberry Pi, especially since the wifi routers it encounters will vary unpredictably.
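The frp route from step 2 of your plan sidesteps this entirely: the Pi only makes an outbound connection to a public frps server you control, so nothing on the wifi router needs to change. A minimal frpc setup might look like this (the server address and ports are placeholders):

    # Write a minimal frpc config and start the client; frps must already
    # be running on the public box (frp.example.com is a placeholder).
    cat > /etc/frp/frpc.ini <<'EOF'
    [common]
    server_addr = frp.example.com
    server_port = 7000

    [shadowsocks]
    type = tcp
    local_ip = 127.0.0.1
    local_port = 8388
    remote_port = 8388
    EOF
    frpc -c /etc/frp/frpc.ini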
About step 3: it shouldn't be a problem. The service listens on a private IP address, which has nothing to do with your client settings, and the port can be fixed; all you need to do is supervise the service so it stays up.
I would recommend setting up VPN services in a cloud environment if the cost is not a big problem. AWS has a one-year free tier plan that may be useful. Take a look at this project, aws-cfn-vpn; it should give you a solution while keeping your hands as clean as possible.

Port forwarding using Apache and a Virgin Media Super Hub 2?

I have created a website which I'd like to host on my own web server. To do this I've installed Raspbian on my Raspberry Pi, loaded Apache, and configured it correctly (if I hit my IP I get the index page).
However, I'm having issues with port forwarding on my Virgin Media Super Hub 2. I'm struggling to find any steps on how to set this up correctly, and on what address I need to hit once the port forward is in place. Any suggestions?
I also had this issue. You probably shrank the DHCP pool in order to configure and use a static IP (and you did it right), but it seems that this crappy hub doesn't like to port-forward to IPs outside the DHCP lease range.
The solution in my case: set the DHCP config back to the original range of 253 hosts, and to make sure your static IP is never handed out by DHCP, add a reservation for it using the device's MAC address.
I was not able to find this solution online, so I hope this helps someone!
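As for which address to hit once the forward is in place: from outside your network you use your public (WAN) IP, not the Pi's LAN address. You can look it up from the Pi itself (the lookup URL is just one of several such services):

    # Print the public IP your ISP has assigned:
    curl -s https://api.ipify.org
    # With port 80 forwarded to the Pi's LAN IP, http://<that-address>/
    # serves the site from outside.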

SSH access behind router without port forwarding

I'm trying to SSH between two computers that are behind routers, without port forwarding on at least one end, namely the computer I'm trying to access.
It seems like this has something to do with SSH tunneling, and I've been trying to get something working, but unfortunately I'm not getting there.
My main goal is to build a website that has full access to my computer behind the router and can control that computer from the website.
Now the question is: is this even possible? I tried AWS, since the public IP I get there should help with the port-forwarding issue on the computer behind the router, but no luck either.
I would appreciate some help or suggestions on how to do that.
I think that is possible.
Take a look at Guacamole.
Guacamole is a clientless remote desktop gateway. It supports standard protocols like VNC and RDP.
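Since you already have an AWS instance with a public IP, a reverse SSH tunnel through it is another common approach; the user names, host name, and ports below are placeholders:

    # On the hidden machine (behind NAT), keep an outbound tunnel open;
    # port 2222 on the AWS box then leads back to this machine's sshd:
    ssh -N -R 2222:localhost:22 tunnel@your-aws-host

    # From anywhere, hop through the AWS box into the hidden machine:
    ssh -J tunnel@your-aws-host -p 2222 user@localhost

For the website-control part, that tunnel gives whatever web backend you run on the AWS box an SSH path into the hidden computer.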

How can I make Apache on an Amazon EC2 Linux box use the elastic IP instead of the private IP?

I've migrated a website to Amazon EC2 that hooks into a service we use, installed on another server (not on Amazon). Access to that service's API is IP-restricted and done by sending XML data using *http_build_query* & *stream_context_create* in PHP.
If I want to connect to the service from a new server, I need to ask the vendor to add the new IP first. I did that by sending them the Elastic IP, but it doesn't work.
While trying to debug, I noticed that the output of $_SERVER['SERVER_ADDR'] is the private IP of the EC2 instance.
I assume that the server on the other side is receiving the same data, so it tries to authenticate the private IP.
I've asked the vendor to allow access from the private IP as well. It's not implemented yet, so I'm not sure if that solves the problem; but as far as I understand the way their API works, it will then try to send data back to the IP it was contacted from, which shouldn't be possible because that server is outside the Amazon cloud.
I might be missing something really obvious here. I added a command to rc.local (running CentOS on my EC2 instance) that associates the elastic IP with the server on startup using ec2-associate-address. That seemed to get a MySQL connection to another outside server working, but no luck with the above-mentioned API.
To rule out one thing: the API is accessed over HTTPS, with ports 80 and 443 (and a MySQL port) enabled in the security groups and tested. The domain and SSL are running fine.
Any hint is highly appreciated; I've searched a lot already but couldn't find anything useful so far.
It sounds like both IPs (private and elastic) are active in your VM. Check by running ifconfig -a. If that's what's happening then the IP that gets used for external traffic will depend on the remote address and your VM's routing table. It could even vary from one connection to the next.
If that's what's going on, the quickest fix is to ifconfig down the interface that has the private address, leaving only the elastic address for all external connections. If that resolves the problem, you can script something that downs the private IP automatically after the elastic IP has been activated; or, if the elastic IP will be permanently assigned to this VM and you really don't need the private IP, you can permanently disassociate the private IP from this VM.
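A sketch of that check and fix; the sub-interface name eth0:1 is a guess, so confirm it against your own ifconfig output first:

    # List every address active on the instance:
    ifconfig -a            # or: ip addr show

    # Check which source address an outside host actually sees
    # (placeholder lookup service):
    curl -s https://api.ipify.org

    # If the private address sits on its own sub-interface, bring it
    # down so outbound traffic leaves from the elastic address:
    sudo ifconfig eth0:1 down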