I am trying to generate traffic (Telnet, VoIP, ...) with the D-ITG tool:
- I start the receiver with ITGRecv, on the host whose IP address is 10.1.1.2, in an xterm of the Mininet SDN.
- I start the Telnet traffic sender in another xterm with the command: ITGSend -a 10.1.1.2 -rp 32769 -C 100 -c 500 -t 20000 -x recv_log_file
I get the following error with this command: Connect error in createTransportChan(): No route to host
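A first sanity check worth doing (my suggestion, not part of the original question): "No route to host" is a plain IP-level failure, raised before D-ITG can even open its signaling channel. Assuming the sender and receiver run on Mininet hosts h1 and h2, you can verify basic reachability from the Mininet CLI:

mininet> pingall                  # confirm every host can reach every other host
mininet> h1 ping -c 3 10.1.1.2    # or probe the receiver's address directly

If the ping fails, the problem lies in the Mininet topology or the controller's flow rules, not in D-ITG itself.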
I want to debug another machine on my network but have to pass through one or more SSH tunnels to get there.
Currently:
# SSH into one machine
ssh -p 22 me@some_ip -i ~/.ssh/00_id_rsa
# From there, SSH into the target machine
# Note that this private key lives on this machine
ssh -p 1234 root@another_ip -i ~/.ssh/01_id_rsa
# Capture debug traffic on the target machine
tcpdump -n -i eth0 -vvv -s 0 -XX -w tcpdump.pcap
But then it's a pain to successively copy that .pcap out. Is there a way to write the pcap directly to my local machine, where I have wireshark installed?
You should use ProxyCommand to chain SSH hosts, and pipe the output of tcpdump directly into Wireshark. To achieve that, create the following SSH config file:
Host some_ip
IdentityFile ~/.ssh/00_id_rsa
Host another_ip
Port 1234
ProxyCommand ssh -o 'ForwardAgent yes' some_ip 'ssh-add ~/.ssh/01_id_rsa && nc %h %p'
I tested this with full paths, so be careful with ~.
To see the live capture, you should use something like:
ssh another_ip "tcpdump -s0 -U -n -w - -i eth0 'not port 1234'" | wireshark -k -i -
If you just want to dump the pcap locally, you can redirect stdout to a filename of your choice:
ssh another_ip "tcpdump -n -i eth0 -vvv -s 0 -XX -w -" > tcpdump.pcap
See also:
- https://serverfault.com/questions/337274/ssh-from-a-through-b-to-c-using-private-key-on-b
- https://serverfault.com/questions/503162/locally-examine-network-traffic-of-remote-machine/503380#503380
- How can I have tcpdump write to file and standard output the appropriate data?
I have a very simple Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.define "one" do |one|
    one.vm.box = "centos/7"
  end
  # keep Vagrant's shared insecure key instead of generating a new one
  config.ssh.insert_key = false
end
(Note: it was creating the VM but exiting with a failure until I installed the vbguest plugin.)
After the VM was created, I wanted to execute a simple Ansible job. My inventory file (Vagrant forwarded port 22 on the guest to port 2222 on the host):
[one]
127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=C:/Users/Lukasz/.vagrant.d/insecure_private_key
And here's the Docker command (from Windows cmd):
docker run --rm -v /c/Users/Lukasz/ansible/ansible:/home:rw -w /home williamyeh/ansible:ubuntu14.04 ansible-playbook -i inventory/testvms site.yml --check -vvvv
Finally, here's the output of the command:
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant
<127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2222 -o 'IdentityFile="C:/Users/Lukasz/.vagrant.d/insecure_private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o PreferredAuthentications=privatekey -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1488381378.63-13786642598588 `" && echo ansible-tmp-1488381378.63-13786642598588="` echo ~/.ansible/tmp/ansible-tmp-1488381378.63-13786642598588 `" ) && sleep 0'"'"''
fatal: [127.0.0.1]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: OpenSSH_7.2p2 Ubuntu-4ubuntu2.1, OpenSSL 1.0.2g 1 Mar 2016\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/root/.ansible/cp/ansible-ssh-127.0.0.1-2222-vagrant\" does not exist\r\ndebug2: resolving \"127.0.0.1\" port 2222\r\ndebug2: ssh_connect_direct: needpriv 0\r\ndebug1: Connecting to 127.0.0.1 [127.0.0.1] port 2222.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 127.0.0.1 port 2222: Connection refused\r\nssh: connect to host 127.0.0.1 port 2222: Connection refused\r\n",
"unreachable": true
}
I can SSH to this VM manually with no problem, specifying the user, port and private key.
Am I doing something wrong?
EDIT 1:
I have mounted the folder with the private key: -v /c/Users/Lukasz/.vagrant.d/:/home/.ssh/ and refer to it from the inventory file: ansible_ssh_private_key_file=/home/.ssh/insecure_private_key. I also assigned a static IP in the Vagrantfile and used it in the Docker command. The error now is "Connection timed out".
There's a misunderstanding of how loopback addresses work, and an underestimation of how complex a system you are actually running.
In the scenario described in your question, you are running four machines with four separate network stacks:
a physical Windows machine
a CentOS VM (presumably running under VirtualBox, orchestrated by Vagrant)
a Docker Linux machine, which runs in the background when you install Docker for Windows (judging from your phrase "the Docker command (from Windows cmd)")
an Ansible container running on Docker's Linux machine
Each of these machines has its own loopback address (127.0.0.1) which is not accessible from any other machine.
You have one port mapping:
Vagrant set up a mapping for the CentOS virtual machine under the control of VirtualBox, so that the VM's port 22 is accessible on the Windows machine's loopback address (127.0.0.1), port 2222.
And thus you can connect with an SSH client from Windows to the CentOS machine.
However, Docker for Windows runs a separate Linux machine and configures the docker command so that when you execute docker from the Windows command-line prompt, you actually work directly on this Linux machine (as you run containers, you don't actually need to access this Docker host directly, so you can be unaware of its existence).
As if that were not enough, each container you run has its own loopback address (127.0.0.1).
As a result, there is no way the Ansible container can reach the loopback address of your physical Windows machine.
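To see this concretely (a sketch of my own, not from the original answer; busybox is just a convenient throwaway image), ask a container what it finds on its own loopback; nothing listening on the Windows host's 127.0.0.1 is visible there:

docker run --rm busybox sh -c "wget -q -O - http://127.0.0.1:2222 || echo nothing on this loopback"
# prints the fallback message: the container's 127.0.0.1 is its own, not the host's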
Probably the easiest solution would be to configure the CentOS box to run on a public network with a static IP address (see Vagrant: Public Networks), by adding, for example, the following line to the Vagrantfile:
config.vm.network "public_network", ip: "192.168.0.17"
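For context, a minimal sketch of the whole Vagrantfile from the question with that single line added (192.168.0.17 is only an example; pick an address that fits your LAN):

Vagrant.configure("2") do |config|
  config.vm.define "one" do |one|
    one.vm.box = "centos/7"
  end
  # bridge the VM onto the LAN with a static address
  config.vm.network "public_network", ip: "192.168.0.17"
  config.ssh.insert_key = false
end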
Then you should use this address in the inventory file and follow Konstantin's advice to make the private key available to the container:
[one]
192.168.0.17 ansible_ssh_user=vagrant ansible_ssh_private_key_file=/path/to/insecure_private_key/mapped/inside/container
It seems that you specify a Windows path for ansible_ssh_private_key_file in your inventory, but use this inventory from inside the container.
You should map C:/Users/Lukasz/.vagrant.d/ into your container and set ansible_ssh_private_key_file from the container's perspective.
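A minimal sketch of both changes together (the /keys mount target is an arbitrary name of mine, not something Ansible requires):

docker run --rm -v /c/Users/Lukasz/ansible/ansible:/home:rw -v /c/Users/Lukasz/.vagrant.d:/keys:ro -w /home williamyeh/ansible:ubuntu14.04 ansible-playbook -i inventory/testvms site.yml --check -vvvv

and in the inventory:

ansible_ssh_private_key_file=/keys/insecure_private_key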
I have the URL and port of a remote Redis server. I am able to write to Redis from Scala. However, I want to connect to the remote Redis via the terminal, using redis-cli or something similar, in order to make several calls to hget, get, etc. (I can do this with my locally installed Redis without any problem.)
redis-cli -h XXX.XXX.XXX.XXX -p YYYY
where XXX.XXX.XXX.XXX is the IP address and YYYY is the port.
Example from my dev environment:
redis-cli -h 10.144.62.3 -p 30000
REDIS CLI COMMANDS
Host, port, password and database
By default redis-cli connects to the server at 127.0.0.1 port 6379. As you can guess, you can easily change this using command line options. To specify a different host name or an IP address, use -h. In order to set a different port, use -p.
redis-cli -h redis15.localnet.org -p 6390 ping
There are two ways to connect to a remote Redis server using redis-cli:
1. Using host & port as individual options in the command
redis-cli -h host -p port
If your instance is password protected
redis-cli -h host -p port -a password
e.g. if my-web.cache.amazonaws.com is the host URL and 6379 is the port, then this will be the command:
redis-cli -h my-web.cache.amazonaws.com -p 6379
If 92.101.91.8 is the host IP address and 6379 is the port:
redis-cli -h 92.101.91.8 -p 6379
The command if the instance is protected with the password pass123:
redis-cli -h my-web.cache.amazonaws.com -p 6379 -a pass123
2. Using a single URI option in the command
redis-cli -u redis://password@host:port
The command in single-URI form with username & password:
redis-cli -u redis://username:password@host:port
e.g. for the same host and port configuration as above, the command would be:
redis-cli -u redis://pass123@my-web.cache.amazonaws.com:6379
The command if the username user123 is also provided:
redis-cli -u redis://user123:pass123@my-web.cache.amazonaws.com:6379
This detailed answer is for those who want to check all the options.
For more information, check the documentation: Redis command line usage
In case of a password, we also need to pass one more parameter:
redis-cli -h host -p port -a password
One thing that confused me a little bit with this command is that if redis-cli fails to connect using the passed connection string, it will still put you in the redis-cli shell, i.e.:
redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused
not connected>
You'll then need to type exit to get out of the shell. I wasn't paying much attention here and kept passing in new redis-cli commands, wondering why they weren't using my passed connection string.
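One way around that (my own habit, not from the original answer) is to append a command such as ping, so redis-cli executes it and exits instead of dropping you into an offline shell:

redis-cli -h my-web.cache.amazonaws.com -p 6379 ping
# prints PONG on success; on failure it prints the connection error and exits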
If you get Error: Server closed the connection, try the --tls switch:
redis-cli --tls -h my-redis.redis.cache.windows.net -p 6379 -a myRedisPassword
-h 👉 hostname
-p 👉 port
-a 👉 password
I've started the sock program like this:
me#ASUS $ ./sock -v -s -F -j 224.0.0.1 -u 127.0.0.11 5555
IP_ADD_MEMBERSHIP set
But, when I try to connect a client, I get this error:
me#ASUS $ ./sock 127.0.0.11 5555 -j 224.0.0.1
IP_ADD_MEMBERSHIP setsockopt error: Address already in use
Am I invoking the clients wrong? How can I connect multiple multicast clients on a single host to a server?
Thanks.
I was trying to figure this out and bumped into this. The following commands worked for me (same host or from different hosts):
sock -u -i -n 10 224.0.0.4 1234
sock -v -s -i -u -j 224.0.0.4 1234
The second command is the receiver (what most people call the server, but that term confuses me for UDP stuff).
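The "Address already in use" error in the question is the classic symptom of two sockets binding the same UDP port without SO_REUSEADDR. If I remember Stevens' sock correctly, the -A flag requests SO_REUSEADDR (treat this flag as an assumption and confirm it against your build's usage output), so a second receiver on the same host should start with something like:

sock -v -s -u -A -j 224.0.0.4 1234    # -A: ask for SO_REUSEADDR before binding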
I have trouble setting up a JMeter client to connect to a remote JMeter server over an intermediate jumphost.
In particular: which ports need to be open and forwarded to which host, and how do I configure JMeter for that? Apparently there are some blog posts about similar setups, but none describes the ports in detail, nor do they connect over an external host (all use localhost?).
The setup is:
JMeter GUI(client) <-> Jumphost <-> JMeter Server
I need to set up one or more SSH tunnels on the jumphost and tell the client and server to connect to this host.
Help will be much appreciated!
http://rolfje.wordpress.com/2012/02/16/distributed-jmeter-through-vpn-and-ssl/
Here are the ports I see in the article:
-A RH-Firewall-1-INPUT -p udp -m udp --dport 1099 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 1099 -j ACCEPT
-A RH-Firewall-1-INPUT -p udp -m udp --dport 50000 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 50000 -j ACCEPT
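For the jumphost topology in the question, a sketch of pulling both of those ports through the intermediate host could look like this (jumphost and jmeter-server are placeholder names, and the port pair assumes the article's 1099/50000 setup):

ssh -L 1099:jmeter-server:1099 -L 50000:jmeter-server:50000 user@jumphost
# then point remote_hosts in the client's jmeter.properties at 127.0.0.1

Note that RMI also opens return connections from the server back to the client, which plain -L forwards do not cover; the step-by-step answer below handles that with an additional -R tunnel.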
Tested with Java 8.
1. Client - modify the jmeter.properties file, adding:
remote_hosts=127.0.0.1:55511
client.rmi.localport=55512
2. Server - modify the jmeter.properties file, adding:
server_port=55511
server.rmi.localhostname=127.0.0.1
server.rmi.localport=55511
3. Connect to the server using:
Linux and Mac users
ssh solr@server -L 55511:127.0.0.1:55511 -R 55512:127.0.0.1:55512
Windows users
putty.exe -ssh user@server -L 55511:127.0.0.1:55511 -R 55512:127.0.0.1:55512
4. Server - start JMeter:
cd apache-jmeter-2.13/bin/
./jmeter-server -Djava.rmi.server.hostname=127.0.0.1
5. Client - start JMeter:
cd apache-jmeter-2.13/bin/
./jmeter.sh -Djava.rmi.server.hostname=127.0.0.1 -t test.jmx
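As a quick sanity check before starting the client (my addition, not part of the original steps), confirm the forward tunnel is actually listening locally while the SSH session is up:

nc -zv 127.0.0.1 55511    # should report the port as open/succeeded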