Expo + Vagrant: Metro Bundler doesn't work - react-native

I created a VM to work on Expo. I can't open the Metro Bundler DevTools in a browser on my host on port 19002. Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
require "socket" # needed for Socket.ip_address_list below

VAGRANTFILE_API_VERSION = "2"
REACT_NATIVE_PACKAGER_HOSTNAME = Socket.ip_address_list.find { |ai| ai.ipv4? && !ai.ipv4_loopback? }.ip_address

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "geerlingguy/ubuntu1804"
  config.ssh.insert_key = false

  config.vm.provider :virtualbox do |v|
    v.name = "mobile-app"
    v.memory = 2048
    v.cpus = 1
    v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    v.customize ["modifyvm", :id, "--ioapic", "on"]
  end

  config.vm.synced_folder "./", "/home/vagrant/workspace", type: "nfs", mount_options: ["nolock,vers=3,udp,noatime"]
  config.vm.hostname = "mobile-app"
  config.vm.network :private_network, ip: "192.168.33.40"
  config.vm.network "forwarded_port", guest: 3000, host: 3000
  config.vm.network "forwarded_port", guest: 19000, host: 19000
  config.vm.network "forwarded_port", guest: 19001, host: 19001
  config.vm.network "forwarded_port", guest: 19002, host: 19002
  config.vm.network "forwarded_port", guest: 19006, host: 19006

  config.vm.provision "shell", path: "install.sh"
  config.vm.provision "set_lan_ip", type: "shell" do |installs|
    installs.inline = "echo 'export REACT_NATIVE_PACKAGER_HOSTNAME=#{REACT_NATIVE_PACKAGER_HOSTNAME}' >> /home/vagrant/.zshrc"
  end
end
When I run npm start, Expo reports that it has started, and I can see the URL and a QR code:
Starting project at /home/vagrant/workspace/app
Expo DevTools is running at http://localhost:19002
Opening DevTools in the browser... (press shift-d to disable)
Starting Metro Bundler
When I open localhost:19002 or 192.168.33.40:19002 in a browser, I see: This site can't be reached. But when I launch expo start:web and open 192.168.33.40:19006, it works fine... When I run curl localhost:19002 on the guest I get HTML back, but the same command on the host fails with: Recv failure (192.168.33.40 gives the same error).
I checked the listening ports:
node 1116 vagrant 21u IPv4 21603 0t0 TCP 127.0.0.1:19002 (LISTEN)
node 1116 vagrant 22u IPv6 21661 0t0 TCP *:19000 (LISTEN)
node 1160 vagrant 20u IPv6 21720 0t0 TCP *:19001 (LISTEN)
and while expo start:web is active:
node 1220 vagrant 22u IPv4 22679 0t0 TCP *:19006 (LISTEN)
I think the problem is the port binding: 19002 is bound to localhost (127.0.0.1) only, but it should listen on all interfaces. Can I configure that in Expo? Where is my mistake?
resources: https://github.com/jean553/react-native-dev, Expo and Vagrant

Solution:
export EXPO_DEVTOOLS_LISTEN_ADDRESS=192.168.33.40
I had tried a different variable first, but this one is the correct one: it makes the DevTools server listen on the private-network address instead of 127.0.0.1 only.
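A minimal sketch of applying and verifying the fix inside the guest (the lsof check mirrors the port listing above; 0.0.0.0 as an all-interfaces alternative is an assumption, not something the answer tested):
# inside the guest, set the variable before starting Expo
# (0.0.0.0 would presumably bind every interface, but 192.168.33.40 is the confirmed value)
export EXPO_DEVTOOLS_LISTEN_ADDRESS=192.168.33.40
npm start
# then confirm that DevTools no longer listens on 127.0.0.1 only:
sudo lsof -iTCP:19002 -sTCP:LISTEN
# expected: node ... TCP 192.168.33.40:19002 (LISTEN)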

Related

Vagrant multi-VM SSH connection setup works on one but not the others

I have searched many similar issues but can't seem to figure out the one I'm having. I have a Vagrantfile with which I set up 3 VMs. I add a public key to each VM so I can run Ansible against the boxes after the vagrant up command (I don't want to use the Ansible provisioner). I forward all the SSH ports on each box.
I can vagrant ssh <server_name> on to each box successfully.
With the following:
ssh vagrant@192.168.56.2 -p 2711 -i ~/.ssh/ansible <-- successful connection
ssh vagrant@192.168.56.3 -p 2712 -i ~/.ssh/ansible <-- connection error
ssh: connect to host 192.168.56.3 port 2712: Connection refused
ssh vagrant@192.168.56.4 -p 2713 -i ~/.ssh/ansible <-- connection error
ssh: connect to host 192.168.56.4 port 2713: Connection refused
And
ssh vagrant@localhost -p 2711 -i ~/.ssh/ansible <-- successful connection
ssh vagrant@localhost -p 2712 -i ~/.ssh/ansible <-- successful connection
ssh vagrant@localhost -p 2713 -i ~/.ssh/ansible <-- successful connection
Ansible can connect to the first one (vagrant@192.168.56.2) but not the other two. I can't figure out why it connects to one and not the others. Any ideas what I could be doing wrong?
The Ansible inventory:
{
  "all": {
    "hosts": {
      "kubemaster": {
        "ansible_host": "192.168.56.2",
        "ansible_user": "vagrant",
        "ansible_ssh_port": 2711
      },
      "kubenode01": {
        "ansible_host": "192.168.56.3",
        "ansible_user": "vagrant",
        "ansible_ssh_port": 2712
      },
      "kubenode02": {
        "ansible_host": "192.168.56.4",
        "ansible_user": "vagrant",
        "ansible_ssh_port": 2713
      }
    },
    "children": {},
    "vars": {}
  }
}
The Vagrantfile:
# Define the number of master and worker nodes
NUM_MASTER_NODE = 1
NUM_WORKER_NODE = 2

PRIV_IP_NW = "192.168.56."
MASTER_IP_START = 1
NODE_IP_START = 2

# Vagrant configuration
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # default box
  config.vm.box = "ubuntu/jammy64"

  # automatic box update checking
  config.vm.box_check_update = false

  # Provision master nodes
  (1..NUM_MASTER_NODE).each do |i|
    config.vm.define "kubemaster" do |node|
      # Name shown in the GUI
      node.vm.provider "virtualbox" do |vb|
        vb.name = "kubemaster"
        vb.memory = 2048
        vb.cpus = 2
      end
      node.vm.hostname = "kubemaster"
      node.vm.network :private_network, ip: PRIV_IP_NW + "#{MASTER_IP_START + i}"
      node.vm.network :forwarded_port, guest: 22, host: "#{2710 + i}"

      # argo and traefik access
      node.vm.network "forwarded_port", guest: 8080, host: 8080
      node.vm.network "forwarded_port", guest: 9000, host: 9000

      # synced folder for kubernetes setup yaml
      node.vm.synced_folder "sync_folder", "/vagrant_data", create: true, owner: "root", group: "root"
      node.vm.synced_folder ".", "/vagrant", disabled: true

      # set up the hosts, dns and ansible keys
      node.vm.provision "setup-hosts", type: "shell", path: "vagrant/setup-hosts.sh" do |s|
        s.args = ["enp0s8"]
      end
      node.vm.provision "setup-dns", type: "shell", path: "vagrant/update-dns.sh"
      node.vm.provision "shell" do |s|
        ssh_pub_key = File.readlines("#{Dir.home}/.ssh/ansible.pub").first.strip
        s.inline = <<-SHELL
          echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
          echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
        SHELL
      end
    end
  end

  # Provision worker nodes
  (1..NUM_WORKER_NODE).each do |i|
    config.vm.define "kubenode0#{i}" do |node|
      node.vm.provider "virtualbox" do |vb|
        vb.name = "kubenode0#{i}"
        vb.memory = 2048
        vb.cpus = 2
      end
      node.vm.hostname = "kubenode0#{i}"
      node.vm.network :private_network, ip: PRIV_IP_NW + "#{NODE_IP_START + i}"
      node.vm.network :forwarded_port, guest: 22, host: "#{2711 + i}"

      # synced folder for kubernetes setup yaml
      node.vm.synced_folder ".", "/vagrant", disabled: true

      # set up the hosts, dns and ansible keys
      node.vm.provision "setup-hosts", type: "shell", path: "vagrant/setup-hosts.sh" do |s|
        s.args = ["enp0s8"]
      end
      node.vm.provision "setup-dns", type: "shell", path: "vagrant/update-dns.sh"
      node.vm.provision "shell" do |s|
        ssh_pub_key = File.readlines("#{Dir.home}/.ssh/ansible.pub").first.strip
        s.inline = <<-SHELL
          echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
          echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
        SHELL
      end
    end
  end
end
Your Vagrantfile confirms what I suspected:
You define port forwarding as follows:
node.vm.network :forwarded_port, guest: 22, host: "#{2710 + i}"
That means port 22 of the guest is made reachable on the host at port 2710+i. For your 3 VMs, from the host's point of view, this means:
192.168.2.1:22 -> localhost:2711
192.168.2.2:22 -> localhost:2712
192.168.2.3:22 -> localhost:2713
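Vagrant can print the effective mapping itself, which is a quick way to double-check these numbers (vagrant port is a stock Vagrant subcommand):
# run on the host, once per machine:
vagrant port kubemaster
# e.g.
#     22 (guest) => 2711 (host)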
As IP addresses for your VMs you have defined the range 192.168.2.0/24, but you try to access the range 192.168.56.0/24.
If a private IP address is defined (e.g. 192.168.2.2 for your 1st node), Vagrant implements this on VirtualBox as follows:
Two network adapters are defined for the VM:
NAT: this gives the VM Internet access
Host-Only: this gives the host access to the VM via IP 192.168.2.2.
For each /24 network, VirtualBox (and Vagrant) creates a separate VirtualBox Host-Only Ethernet Adapter, and the host is .1 on each of these networks.
What this means for you is that if you use an IP address from the 192.168.2.0/24 network, an adapter is created on your host that always gets the IP address 192.168.2.1/24, so you have the addresses 192.168.2.2 - 192.168.2.254 available for your VMs.
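You can list the host-only adapters, and the .1 addresses your host holds on them, with VirtualBox's own CLI:
# run on the host; each adapter block includes a Name: and an IPAddress: line
VBoxManage list hostonlyifs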
This means that your master's IP address collides with your host's!
But why does the access to your first VM work?
ssh vagrant@192.168.56.1 -p 2711 -i ~/.ssh/ansible <-- successful connection
That is relatively simple: the network 192.168.56.0/24 is the default Host-Only network under VirtualBox, so you probably have a VirtualBox Host-Only Ethernet Adapter with the address 192.168.56.1/24.
Because you defined port forwarding in your Vagrantfile, the 1st VM is mapped to localhost:2711. If you access 192.168.56.1:2711, that is your own host, i.e. localhost, where the SSH port of the 1st VM is mapped to port 2711.
So what do you have to do now?
Change the IP addresses of your VMs, e.g. use 192.168.2.11 - 192.168.2.13.
The access to the VMs is possible as follows:
Node        via guest IP       via localhost
kubemaster  192.168.2.11:22    localhost:2711
kubenode01  192.168.2.12:22    localhost:2712
kubenode02  192.168.2.13:22    localhost:2713
Note: if you connect via the guest IP address, use port 22; if you connect via localhost, use the forwarded port 2710+i that you defined.
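As a sketch, both access paths from the table can be exercised directly (key path as in the question):
# via the guest IP, plain SSH port:
ssh vagrant@192.168.2.11 -p 22 -i ~/.ssh/ansible
# via localhost, the forwarded port:
ssh vagrant@localhost -p 2711 -i ~/.ssh/ansible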

SSH forwarding does not work for vagrant

I set up the SSH params of Vagrant 1.8.1 as described here.
In short, this is in my host's SSH config file:
Host bitbucket.org
    Hostname bitbucket.org
    IdentityFile ~/.ssh/id_bitbucket
    User zuba
    ForwardAgent yes
and in the Vagrantfile:
config.ssh.forward_agent = true
On the host machine, ssh-add -L shows the key, while on the Vagrant box it reports that the agent has no identities, and git clone fails with an authentication error.
How can I solve this issue?
UPDATE 1:
vagrant ssh -c 'ssh-add -l' shows the key
> vagrant ssh-config
Host p4
    HostName 127.0.0.1
    User vagrant
    Port 2222
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no
    PasswordAuthentication no
    IdentityFile /home/zuba/.vagrant.d/insecure_private_key
    IdentitiesOnly yes
    LogLevel FATAL
    ForwardAgent yes
UPDATE 2:
Found a duplicate post with no answers: vagrant ssh agent forwarding only works for inline commands?
UPDATE 3:
Here is my Vagrantfile:
Vagrant.configure("2") do |config|
boxes = {
"p4" => "10.2.2.15",
}
boxes.each do |box_name, box_ip|
config.vm.define box_name do |config|
config.vm.box = "trusty-64"
config.vm.box_url = "https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box"
config.vm.hostname = "p4"
config.vm.network :private_network, ip: box_ip
config.vm.network "forwarded_port", guest: 3000, host: 3000
config.vm.network "forwarded_port", guest: 3001, host: 3001
config.vm.network "forwarded_port", guest: 3002, host: 3002
config.vm.network "forwarded_port", guest: 3003, host: 3003
config.vm.network "forwarded_port", guest: 6379, host: 6379 # Redis
config.vm.provider "virtualbox" do |vb|
vb.gui = false
vb.name = "p4"
# Use VBoxManage to customize the VM. For example to change memory:
vb.customize ["modifyvm", :id, "--memory", "1024"]
end
config.vm.synced_folder "../..", "/home/vagrant/my_src"
config.ssh.forward_agent = true # to use host keys added to agent
# provisioning
config.vm.provision :shell, :inline => "sudo apt-get update"
config.vm.provision "chef_solo" do |chef|
chef.log_level = "info"
chef.environment = "development"
chef.environments_path = "environments"
chef.cookbooks_path = ["cookbooks", "site-cookbooks"]
chef.roles_path = "roles"
chef.data_bags_path = "data_bags"
chef.json.merge!(JSON.parse(IO.read("nodes/#{box_ip}.json")))
end
config.exec.commands '*', directory: '/home/vagrant'
config.exec.commands 'apt-get', prepend: 'sudo'
config.exec.commands %w[rails rspec rake], prepend: 'bundle exec'
end
end
end
Finally I found the post which helped me figure out what prevented Vagrant from using the agent's key.
I had run ssh-add in one GNU screen session while doing vagrant ssh in another screen session. That is why the ssh-agent was 'inaccessible' to Vagrant.
When I added the key and ran vagrant ssh in the same screen session, everything started working.
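The underlying mechanism is the SSH_AUTH_SOCK environment variable: each screen session can point at a different (or stale) agent socket. A quick way to confirm both shells talk to the same agent before running vagrant ssh:
echo $SSH_AUTH_SOCK            # compare in both screen windows; they must match
ssh-add -l                     # the id_bitbucket key should be listed here
vagrant ssh -c 'ssh-add -l'    # the forwarded agent should then show the same key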

Vagrant permissions issue

Here is my Vagrantfile. The problem is that inside /var/www I can't set a particular folder's permissions using the configuration below. For example, /var/www/sample must get permission 777, but I can't set it, neither as root nor as the vagrant user. I tried changing the mount type to rsync; still the same problem.
# -*- mode: ruby -*-
# vi: set ft=ruby :

# check and install required Vagrant plugins
required_plugins = ["vagrant-hostmanager"]
required_plugins.each do |plugin|
  if Vagrant.has_plugin?(plugin) then
    system "echo OK: #{plugin} already installed"
  else
    system "echo Not installed required plugin: #{plugin} ..."
    system "vagrant plugin install #{plugin}"
  end
end

Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.ssh.shell = "bash -c 'BASH_ENV=/etc/profile exec bash'"
  config.vm.provision :shell,
    keep_color: true,
    path: "provision/setup.sh"
  config.vm.box_check_update = true
  config.vm.network "private_network", ip: "192.168.56.10"
  config.vm.synced_folder '.', '/vagrant', disabled: true
  config.vm.synced_folder "./", "/var/www", create: true, group: "vagrant", owner: "vagrant", type: "rsync"

  config.vm.provider "virtualbox" do |vb|
    vb.name = "Web Server"
    vb.gui = false
    vb.memory = "512"
  end
end
What am I doing wrong?
Change it to:
config.vm.synced_folder "./", "/var/www", group: "vagrant", owner: "vagrant", mount_options: ["dmode=777", "fmode=664"]
This mounts directories with mode 777 and files with mode 664; adjust those values to your needs. Note that dmode/fmode are options of VirtualBox's vboxsf shared-folder mount, so drop type: "rsync" and use the default mount type.
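A quick sanity check from inside the guest after a vagrant reload:
mount | grep /var/www      # should show type vboxsf with dmode=777,fmode=664
ls -ld /var/www/sample     # directories should now appear as drwxrwxrwx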

(vagrant & ssh) require password

My Vagrantfile:
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty32"
  config.vm.box_check_update = false
  config.vm.network "forwarded_port", guest: 3000, host: 3000
  config.vm.synced_folder "./synced/", "/home/vagrant/"
  config.ssh.private_key_path = "~/.ssh/id_rsa"
  config.ssh.forward_agent = true

  config.vm.provider "virtualbox" do |vb|
    vb.memory = "1024"
    vb.name = "test Ubuntu 14.04 box"
  end
end
When I execute
vagrant ssh
SSH asks for a password. But Vagrant should use my local SSH key and not prompt for one.
I've faced the same issue. The problem is that you're syncing into the guest's home folder: the mount shadows /home/vagrant and with it the ~/.ssh/authorized_keys file that Vagrant's key-based login relies on. I found the solution here; please refer to that post for more info. You need to change your sync paths.
Instead of
config.vm.synced_folder "./synced/", "/home/vagrant/"
do
config.vm.synced_folder "./synced/", "/home/vagrant/mySyncFolder"
Do you have a line like the one below in your ~/.ssh/config?
PubkeyAcceptedKeyTypes ssh-dss,ssh-rsa
In my case, after removing this line, vagrant ssh stopped asking me for a password.
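If vagrant ssh still prompts, a verbose manual connection usually shows which of the two causes applies (port 2222 and the insecure key path are taken from a typical vagrant ssh-config output, as shown in the previous question):
vagrant ssh-config       # note HostName, Port and IdentityFile
ssh -vvv -p 2222 -i ~/.vagrant.d/insecure_private_key vagrant@127.0.0.1
# in the -vvv output, watch whether the key is offered and why it is rejected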

Accessing a Vagrant VM (VirtualBox running CentOS 7) from inside the cluster

I am currently trying to get into Ansible, and for that use case I have set up a cluster of 3 VMs using VirtualBox and Vagrant. My VM setup looks like this:
Vagrantfile
$inline_m1 = <<SCRIPT
yum -y update
yum install -y git
yum install -y ansible
SCRIPT
$inline_n1_n2 = <<SCRIPT
yum -y update
yum install -y git
SCRIPT
Vagrant.configure(2) do |config|
config.vm.define "master1" do |conf|
# conf.vm.box = "peru/my_centos-7-x86_64"
# conf.vm.box_version = "20181211.01"
conf.vm.box = "centos/7"
conf.vm.hostname = 'master1.vg'
conf.vm.network "private_network", ip: "192.168.255.100"
conf.vm.provider "virtualbox" do |v|
v.memory = 6144
v.cpus = 2
end
conf.vm.provision "shell", inline: $inline_m1
conf.vm.provision "file", source: "./etc.hosts", destination: "~/etc/hosts"
conf.vm.provision "file", source: "./master1/etc.ansible.hosts", destination: "~/etc/ansible.hosts"
end
config.vm.define "node1" do |conf|
conf.vm.box = "centos/7"
conf.vm.hostname = 'node1.vg'
conf.vm.network "private_network", ip: "192.168.255.101"
conf.vm.provision "file", source: "./etc.hosts", destination: "~/etc/hosts"
conf.vm.provision "shell", inline: $inline_n1_n2
end
config.vm.define "node2" do |conf|
conf.vm.box = "centos/7"
conf.vm.hostname = 'node2.vg'
conf.vm.network "private_network", ip: "192.168.255.102"
conf.vm.provision "file", source: "./etc.hosts", destination: "~/etc/hosts"
conf.vm.provision "shell", inline: $inline_n1_n2
end
end
So it is 1 master and 2 nodes. The master is supposed to have Ansible installed and access the nodes via SSH. All machines are up and running, and I can connect to my master using
vagrant ssh master1
I also have a modified /etc/hosts so I can reach master1.vg, node1.vg, etc.
But there is one problem: I am supposed to connect via SSH to the nodes from inside the master, but
ssh node1.vg
does not work; permission is denied after asking for a password. According to the documentation the default password should be "vagrant", but that is not the case here (I guess because the access method is already set to SSH with a key). I have googled quite a bit, as I thought this would be a common question, but found no satisfying answers. Do you have any idea how to make an SSH connection from the master1 VM to one of the node VMs?
I've also uploaded the config to a repo (https://github.com/relief-melone/vagrant-ansibletestingsetup)
OK, I solved it now. Vagrant generates a private key per machine, and you need to get each node's key into your master VM with the correct permissions. You also need to set up your network correctly. So let's tackle the network point first.
Your /etc/hosts has to be set up. In my setup it looks like this:
/etc/hosts
192.168.255.100 master1.me.vg
192.168.255.101 node1.me.vg
192.168.255.102 node2.me.vg
Your private keys are stored in ./.vagrant/machines/nodeX/virtualbox/private_key. You need the keys of all the nodes you want to access from your master, which leaves us with the following:
Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.define "node1" do |conf|
    conf.vm.box = "centos/7"
    conf.vm.hostname = 'node1.me.vg'
    conf.vm.network "private_network", ip: "192.168.255.101"
    conf.vm.provision "file", source: "./etc.hosts", destination: "~/etc.hosts"
    conf.vm.provision "shell", path: "./node/shell.sh"
  end

  config.vm.define "node2" do |conf|
    conf.vm.box = "centos/7"
    conf.vm.hostname = 'node2.me.vg'
    conf.vm.network "private_network", ip: "192.168.255.102"
    conf.vm.provision "file", source: "./etc.hosts", destination: "~/etc.hosts"
    conf.vm.provision "shell", path: "./node/shell.sh"
  end

  config.vm.define "master1" do |conf|
    conf.vm.box = "centos/7"
    conf.vm.hostname = 'master1.me.vg'
    conf.vm.network "private_network", ip: "192.168.255.100"
    conf.vm.provider "virtualbox" do |v|
      v.memory = 6144
      v.cpus = 2
    end
    conf.vm.provision "file", source: "./etc.hosts", destination: "~/etc.hosts"
    conf.vm.provision "file", source: "./master1/etc.ansible.hosts", destination: "~/etc.ansible.hosts"
    conf.vm.provision "file", source: "./.vagrant/machines/node1/virtualbox/private_key", destination: "~/keys/node1"
    conf.vm.provision "file", source: "./.vagrant/machines/node2/virtualbox/private_key", destination: "~/keys/node2"
    conf.vm.provision "shell", path: "./master1/shell.sh"
  end
end
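As a quick check on the host, the per-machine keys referenced above should exist after vagrant up:
ls -l .vagrant/machines/*/virtualbox/private_key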
At last, you have to set the permissions of the private keys, because SSH rejects key files whose permissions are too open. My shell files look like this:
./master1/shell.sh
yum -y update
yum install -y git
yum install -y ansible
cp /home/vagrant/etc.hosts /etc/hosts
cp /home/vagrant/etc.ansible.hosts /etc/ansible/hosts
chmod 600 /home/vagrant/keys/*
./node/shell.sh
yum -y update
yum install -y git
cp /home/vagrant/etc.hosts /etc/hosts
After all that is done,
vagrant up
should run smoothly, and you can enter your master VM using
vagrant ssh master1
Inside that master you can now connect to e.g. the node2 machine using
ssh -i ~/keys/node2 vagrant@node2.me.vg
As this setup involves quite a number of files, I also put it into a repo, which can be found here:
https://github.com/relief-melone/vagrant-ansibletestingsetup/tree/working-no-comments
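Assuming the inventory provisioned to /etc/ansible/hosts points the nodes at those key files (an assumption; the etc.ansible.hosts file is not shown here), a quick end-to-end check from inside master1 looks like:
ssh -i ~/keys/node1 vagrant@node1.me.vg hostname
ssh -i ~/keys/node2 vagrant@node2.me.vg hostname
ansible all -m ping    # should report SUCCESS for every node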