Trying to create a virtual topology using MiniEdit that should use MQTT traffic

I have to create a virtual topology with MiniEdit that has to communicate using the MQTT pub/sub system.
I'm working on VirtualBox (Mininet-WiFi).
I have installed Mosquitto and the clients... using the terminals I have no problem with:
mosquitto_sub -t test
mosquitto_pub -t test -m hello!
But when I emulate the topology in MiniEdit (controller, switch and two hosts), the hosts cannot talk using Mosquitto. I think there is no broker that can handle the communications in the virtual topology. Any suggestions?
I also tried to connect to a remote server, using CloudMQTT, but I only got a failed connection.
I expected that using
xterm h1 h2
in the command-line interface of MiniEdit, I would be able to make the two hosts talk to each other using the
mosquitto_sub/pub system, because in the xterm of each host, if I type
service mosquitto status
I see that
mosquitto is active.
UPDATE
Solved.
I just had to run another host on which I type "mosquitto", and the other hosts can then reach it using "mosquitto_sub/pub -h 10.0.0.3", for example.

Two brokers (one on each host) won't automatically discover each other when the "link" comes up.
You will have to either manually configure a bridge between the two brokers if you want messages to be shared,
or pick one and have the clients explicitly connect to that one broker, e.g. with the -h option of the mosquitto_pub and mosquitto_sub commands.
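For the bridge option, a minimal sketch of what one broker's mosquitto.conf might contain (the connection name, peer address, and topic are illustrative assumptions):
connection bridge-to-peer
address 10.0.0.2:1883
topic test both 0
With that in place, messages published to "test" on either broker are relayed to the other.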

I agree with the solution. Let me give a more in-depth explanation.
Run a basic Mininet topology with 4 hosts and 4 switches:
mn --topo linear,4
Then open an xterm for 3 hosts:
xterm h1 h2 h3
Three terminals will pop up, one per host. On h3's (10.0.0.3) xterm run the broker:
mosquitto
On h2 (10.0.0.2), subscribe to the topic with:
mosquitto_sub -h 10.0.0.3 -t "home/bedroom/light"
On h1 (10.0.0.1), publish a message with:
mosquitto_pub -h 10.0.0.3 -t "home/bedroom/light" -m "ON"
You can now see the message in h2's terminal. Hope it helps.

GitLab CI runner with SSH ProxyJump

I have the following settings in my /etc/ssh/ssh_config file:
Host serverA
    User idA

Host serverB
    User idB
    ProxyJump serverA
I’ve also copied the public keys, so if I locally run ssh serverB I’m correctly connected to serverB as idB through serverA.
Now, here’s my runner configuration in /etc/gitlab-runner/config.toml:
[[runners]]
name = "ssh-runner-1"
url = "http://my-cicd-server"
token = "xxxxxxxxxxxxxxxx"
executor = "ssh"
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.cache.azure]
[runners.ssh]
user = "idB"
host = "serverB"
identity_file = "/home/gitlab-runner/.ssh/id_ed25519"
When I run a CI/CD job on this runner I get a "connection refused" error:
ERROR: Preparation failed: ssh command Connect() error: ssh Dial() error: dial tcp xxx.xxx.xxx.xxx:22: connect: connection refused
I conclude that the ProxyJump configuration is not applied, and since the machine with the runner can’t directly connect to serverB, I get denied access.
How can I configure the runner to apply the proxy jump configuration?
The GitLab runner uses a Go-based SSH client. It does not respect your system SSH configuration and does not have the same configurability as the standard ssh (usually OpenSSH) packages you typically find installed in operating system distributions or similar packages.
The Go client does not support the ProxyJump configuration.
Your best bet would probably be to configure a tunneled connection where your entrypoint does not require SSH configuration options that are not supported.
Local port forwarding
One way might be to open a local port-forwarding tunnel, then in your GitLab configuration, specify the host as localhost and port as the forwarded port.
For example:
Open the tunnel -- local port 2222 forwards to port 22 on ServerB via an ssh connection through ServerA:
ssh -L 2222:ServerB:22 -N ServerA
Configure runner to use the tunnel:
...
[runners.ssh]
host = "localhost"
port = 2222
...
With this approach, you may have to write some automation on your server to restore the tunnel connection in the event it is broken. How you might do this depends on your operating system and preferred service manager. Or use a tool like autossh, as sketched below.
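For example, a minimal autossh invocation for the same tunnel (a sketch; -M 0 disables autossh's monitor port in favor of SSH's own keepalives):
autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -L 2222:ServerB:22 ServerA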
This is basically how the ProxyJump configuration works under the hood.
IP/Port forwarding system
A similar approach would be to have your jump system automatically forward connections to the desired destination. This might be something like a software firewall rule (e.g. using iptables routing rules). That way the forwarding occurs transparently. Then simply tell the runner to target ServerA and the traffic will be transparently moved to ServerB.
This approach is more reliable, since you won't have to do anything to keep the tunnel alive if it ever drops. However, the configuration is much more complex and requires a static IP for ServerB.
For example, on ServerA, assuming the IP of ServerB is 10.10.10.10 the following iptables configuration could be used:
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
iptables -t nat -A POSTROUTING -j MASQUERADE
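Note that the DNAT rule only takes effect if packet forwarding is enabled on ServerA (assuming a Linux jump host):
sysctl -w net.ipv4.ip_forward=1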
Then the GitLab runner configuration:
...
[runners.ssh]
host = "ServerA"
port = 2222
...
Lastly, it may also be useful to know that disable_strict_host_key_checking is an undocumented configuration option for the runner as well, in the event you need this.
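If you do need it, the option would sit under the [runners.ssh] section of config.toml (a sketch; the boolean value is an assumption):
[runners.ssh]
disable_strict_host_key_checking = true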

SFTP, SSH & SSH Tunneling

I would like to understand the concept of SSH tunneling in detail, as I am learning a few things around this topic. I have gone through some details in public forums but still have a few questions.
An SFTP service is running on a remote server and I have been given credentials to connect to it. I am using a GUI like WinSCP to connect to the remote server. What's the role of SSH tunneling here?
The remote SFTP server admin asked me to generate an RSA public key from my machine, and it was added to the remote server. Now I can directly connect to the server from an SSH terminal without a password. What's the role of SSH tunneling here?
Is tunneling implicit, or does it need to be invoked explicitly in certain circumstances?
Please clarify.
SSH tunneling, SSH console sessions and SFTP sessions are functionally unrelated things.
They can be used simultaneously during a single session, but usually they are not, so do not try to find any relation or role of tunneling in an ssh/sftp session.
It does not make sense to mix SSH tunneling with multiple ssh/sftp sessions.
Basically you would use a dedicated ssh session for tunneling and extra sessions for console and transfers.
What the heck is SSH tunneling?
Quite often both parties (you and the server) reside in different networks where arbitrary network connections between the networks are impossible.
For example, the server can see workstation nodes and service nodes on its own network which are not visible to the outside network due to NAT.
The same is valid for the user who initiates the connection to the remote server:
you (the ssh client) can see your local resources (workstation nodes and server nodes) but can't see nodes on the network of the remote server.
Here comes ssh tunneling.
An SSH tunnel is NOT a tool to assist ssh-related things like remote console sessions and secure file transfers, but quite the other way around: it is the SSH protocol that assists you in building a transport to tunnel generic TCP connections, the same way a TCP proxy works. Once such a pipe is built and in action, it does not know what is being transferred via the pipe/tunnel.
Its concept is similar to a TCP proxy.
A TCP proxy runs on a single node, so it serves as the acceptor of connections and as the initiator of outgoing connections.
In the case of SSH tunneling, the concept of a TCP proxy is split in two halves: one of the nodes (participating in the ssh session) performs the role of listener (acceptor of connections) and the second node performs the role of proxy (i.e. initiates outgoing connections).
When you establish an SSH session to the remote server you can configure two types of tunnels, which are active while your ssh connection is active.
Most ssh clients use notations like
-R [IP1:]PORT1:IP2:PORT2
-L [IP1:]PORT1:IP2:PORT2
The most confusing/hard part to understand in this ssh tunneling business is the L and R markers/switches.
Those letters L and R can confuse beginners quite a lot, because there are actually 6(!!!) parties in this game (each with its own point of view of what is local and what is remote):
you
the ssh server
your neighbors who want to expose their ports to anyone who sees the server
your neighbors who want to connect to any service the server sees
anyone who sees the server and wants to connect to a service your neighbor provides (opposite side/socket of case #3)
any service in the local network of the server that wants to be exposed to your LAN (opposite side/socket of case #4)
In terms of the ssh client these tunnel types are:
"R" tunnel (server listens) - YOU expose network services from your LOCAL LAN to the remote LAN (you instruct the sshd server to start listening on ports at the remote side and route all incoming connections back to you).
"L" tunnel (you listen) - the server exposes resources of its REMOTE LAN to your LAN (your ssh client starts listening on ports on your workstation; your neighbors can access remote network services by connecting to the ports of your workstation; the server makes outgoing connections to its local services on behalf of your ssh client).
So SSH tunneling is about providing access to the service which typically is inaccessible due to network restrictions or limitations.
And here is a simple counter-intuitive rule to remember while creating tunnels:
to open access to a Remote service you use the -L switch,
and
to open access to a Local service you use the -R switch.
examples of "R" tunnels:
Jack is your coworker(backend developer) and he develops server-side code at his workstation with IP address 10.12.13.14. You are team lead (or sysadmin) who organizes working conditions. You are sitting in the same office with Jack and want to expose his web server to outside world through remote server.
So you connect to ssh server with following command:
ssh me#server1 -g -R 80:ip-address-of-jack-workstation:80
in such case anyone on the Internet can access Jack's current version of website by visiting http://server1/
Suppose there are many IoT Linux devices (like Raspberry Pis) in the world, sitting in multiple home networks and thus not accessible from outside.
They could connect to the home server and expose their own port 22 to the server, so that an admin is able to connect to all those devices.
So the RPi devices could connect to the server in such a way:
RPi device #1
ssh rpi1@server -R 10122:localhost:22
RPi device #2
ssh rpi1@server -R 10222:localhost:22
RPi device #3
ssh rpi1@server -R 10322:localhost:22
and the sysadmin, while at the server, could connect to any of them:
ssh localhost -p 10122 # to connect to the first device
ssh localhost -p 10222 # to connect to the second device
ssh localhost -p 10322 # to connect to the third device
the admin on remote premises blocked outgoing ssh connections and you want the production server to contact bitbucket through your connection...
#TODO: add example
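A sketch of what that missing example might look like (the TODO above was never filled in; host and repo names are hypothetical): from your workstation, which can still reach bitbucket, open a reverse tunnel to the production server, then point git at the forwarded port:
ssh me@production-server -R 2222:bitbucket.org:22
# then, on production-server:
git clone ssh://git@localhost:2222/some-team/some-repo.git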
Typical pitfalls in ssh tunneling:
mapping a remote service to a local privileged port
ssh me@server -L 123:hidden-smtp-server:25 # fails
#bind fails due to privileged ports
#we try sudo ssh to allow the ssh client to bind to the privileged local port
sudo ssh me@server -L 123:hidden-smtp-server:25 # fails
#this usually results in rejected public keys because ssh looks for the key in /root/.ssh/id_rsa
#so you need to coerce ssh into using your key while running under the root account
sudo ssh me@server -i /home/me/.ssh/id_rsa -L 123:hidden-smtp-server:25
exposing some service from the local network to anyone through the public server:
the typical command would be
ssh me@server -R 8888:my-home-server:80
#quite often no one can connect to server:8888 because sshd binds to localhost.
#To make it work you need to edit the /etc/ssh/sshd_config file to enable GatewayPorts (the line in the file needs to be GatewayPorts yes).
my tunnel works great on my computer for me only, but I would like my coworkers to access my tunnel as well:
the typical working command you start with would be
ssh me@server -L 1234:hidden-smtp-server:25
#by default ssh binds to the loopback address (127.0.0.1), and that is the reason why no one else can use such a tunnel.
#you need to use the -g switch and possibly manually specify the bind interface:
ssh me@server -g -L 0.0.0.0:1234:hidden-smtp-server:25

Why is SSH not working in Kubernetes pods/containers?

We have an application which uses SSH to copy an artifact from one node to another. While creating the Docker image (Linux CentOS 8 based), I installed the OpenSSH server and client. When I run the image from the docker command and exec into it, I am able to run the SSH command successfully, and I also see port 22 enabled and listening ($ lsof -i -P -n | grep LISTEN).
But if I start a Pod/container using the same image in the Kubernetes cluster, I do not see port 22 enabled and listening inside the container. Even if I try to start sshd from inside the k8s container, I get the error below:
Redirecting to /bin/systemctl start sshd.service Failed to get D-Bus connection: Operation not permitted
Is there any way to start the k8s container with SSH enabled?
There are three things to consider:
Like David said in his comment:
I'd redesign your system to use a communication system that's easier to set up, like with HTTP calls between pods.
If you put a Service in front of your deployment, it is not going to relay any SSH connections. So you have to point to the pods directly, which might be pretty inconvenient.
In case you have missed that: you need to declare port 22 in your deployment template, as sketched below.
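On the error itself: the image tries to start sshd via systemctl, but there is no systemd/D-Bus inside a Kubernetes container. A minimal sketch of running sshd directly as the container process instead (base image and package names are illustrative assumptions):
FROM centos:8
# generate host keys at build time; no systemd is available at runtime
RUN yum install -y openssh-server && ssh-keygen -A
EXPOSE 22
# run sshd in the foreground as the container's main process
CMD ["/usr/sbin/sshd", "-D"]
and the matching port declaration in the deployment template:
ports:
- containerPort: 22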
Please let me know if that helped.

X11 forwarding of a GUI app running in docker

First off: I have read the answers to similar questions on SO, but none of them worked.
IMPORTANT NOTE: The answer below is still valid, but maybe jump to the end for an alternative.
The situation:
App with GUI is running in a docker container (CentOS 7.1) under Arch Linux. (machine A)
Machine A has a monitor connected to it.
I want to access this GUI via X11 forwarding on my Arch Linux client machine. (machine B)
What works:
GUI works locally on machine A (with /tmp/.X11-unix being mounted in the Docker container).
X11 forwarding of any app running outside of docker (X11 forwarding is set up and running properly for non-docker usage).
I can even switch the user while remotely logged in, copy the .Xauthority file to the other user and X11 forwarding works as well.
Some setup info:
Docker networking is 'bridged'.
Container can reach host (firewall is open).
DISPLAY variable is set in container (to host-ip-addr:10.0 because of TCP port 6010 where sshd is listening).
Packets to X forward port (6010) are reaching the host from the container (tcpdump checked).
What does not work:
X11 forwarding of the Docker app
Errors:
X11 connection rejected because of wrong authentication.
xterm: Xt error: Can't open display: host-ip-addr:10.0
Things I tried:
starting the client ssh with the -Y option on machine B
putting "ForwardX11Trusted yes" in ssh_config on machine B
xhost + (to allow any client to connect) on machine B
putting Host * in ssh_config on machine B
putting X11UseLocalhost no in sshd_config on machine A (to allow non-localhost clients)
adding the X auth token in the container with xauth add from the login user on machine A
just copying over the .Xauthority file from a working user into the container
making sure the .Xauthority file has the correct permissions and owner
How can I just disable all the X security stuff and get this working?
Or even better: how can I get it working with security?
Is there at least a way to enable extensive debugging to see where exactly the problem is?
Alternative: The first answer below shows how to effectively resolve this issue. However, I would recommend you look into a different approach altogether, namely VNC. I personally switched to a TigerVNC setup that replaces the X11 forwarding and have not looked back. The performance is just leagues above what X11 forwarding delivered for me. There might be some instances where you cannot use VNC for whatever reason, but I would try it first.
The general setup is now as follows:
- The VNC server runs on machine A on the host (not inside a docker container).
- Now you just have to figure out how to get a GUI for inside a docker container (which is a much more trivial undertaking).
- If the docker container was NOT started from the VNC environment, the DISPLAY variable may need adjusting.
Thanks so much @Lazarus535
I found that for me adding the following to my docker command worked:
--volume="$HOME/.Xauthority:/root/.Xauthority:rw"
I found this trick here
EDIT:
As Lazarus pointed out correctly you also have to set the --net=host option to make this work.
OK, here is the thing:
1) Log in to the remote machine
2) Check which display was set with echo $DISPLAY
3) Run xauth list
4) Copy the line corresponding to your DISPLAY
5) Enter your docker container
6) xauth add <the line you copied>*
7) Set DISPLAY with export DISPLAY=<ip-to-host>:<no-of-display>
*so far so good, right?
This was nothing new...however here is the twist:
The line printed by xauth list for the login user looks something like this (in my case):
<hostname-of-machine>/unix:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
Because I use the bridged docker setup, the X forwarding port is not listening locally, because sshd is not running in the container. Change the line above to:
<ip-of-host>:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
In essence: Remove the /unix part.
<ip-of-host> is the IP address where the sshd is running.
Set the DISPLAY variable as above.
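Put together, inside the container this amounts to (same placeholders as above):
xauth add <ip-of-host>:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
export DISPLAY=<ip-of-host>:<no-of-display>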
So the error was that the DISPLAY name in the environment variable was not the "same" as the entry in the xauth list / .Xauthority file, and the client could therefore not authenticate properly.
I switched back to an untrusted X11 forwarding setting.
The X11UseLocalhost no setting in the sshd_config file is important, however, because the incoming connection will come from a "different" machine (the docker container).
This works in any scenario.
Install xhost if you don't have it. Then, in bash,
export DISPLAY=:0.0
xhost +local:docker
After this, run your docker run command (or whatever docker command you are running) with -e DISPLAY=$DISPLAY
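For example, a minimal sketch of such an invocation (the image name and X client are placeholders; the X11 socket mount matches the local setup described in the question):
docker run --rm -it \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  some-x11-image xeyes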
Usually it works via https://stackoverflow.com/a/61060528/429476
But if you are running docker as a different user than the one you used to ssh -X into the server, then copying the .Xauthority file only helped together with volume-mapping the file.
Example - I sshed into the server as the alex user. Then I ran docker after su - root and got this error:
X11 connection rejected because of wrong authentication.
After copying the .Xauthority file and mapping it as in https://stackoverflow.com/a/51209546/429476, it worked:
cp /home/alex/.Xauthority .
docker run -it --network=host --env DISPLAY=$DISPLAY --privileged \
--volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
-v /tmp/.X11-unix:/tmp/.X11-unix --rm <dockerimage>
More details on the wiring here: https://unix.stackexchange.com/a/604284/121634
Some clarifying remarks. The host is A, the local machine is B.
I've edited this post to note things that I think should work in theory but haven't been tested, vs. things I know to work.
Running docker non-interactively
If your docker container is running non-interactively and running sshd, you can use jump hosts or ProxyCommand and specify the X11 client to run. You should NOT volume-share your Xauthority file with the container, and sharing -e DISPLAY likely has no effect on future ssh sessions.
Since you essentially have two sshd servers, either of the following should work out of the box
if you have openssh-client greater than version 7.3, you can use the following command
ssh -X -J user-on-host@hostmachine,user-on-docker@dockercontainer xeyes
If your openssh client is older, the syntax is instead
(google says the -X is not needed in the proxy command, but I am suspicious)
ssh -X -o ProxyCommand="ssh -W %h:%p user-on-host@hostmachine" user-on-docker@dockermachine xeyes
Or ssh -X into host, then ssh -X into docker.
In either of the above cases, you should NOT share .Xauthority with the container
Running docker interactively from within the ssh session
The easiest way to get this done is to set --net=host and X11UseLocalhost yes.
If your docker is running sshd, you can open a second ssh -X session on your local machine and use the jumphost method as above.
If you start it in the ssh session, you can either pass -e DISPLAY=$DISPLAY or export it once you're in. You might have to export it if you attach to an existing container where this line wasn't used.
Use these docker args for --net=host and X11UseLocalhost yes:
ssh -X to host
-e DISPLAY=$DISPLAY
-v $HOME/.Xauthority:/home/same-as-dash-u-user/.Xauthority
-u user
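Put together, a sketch of the whole flow (user and image names are placeholders):
ssh -X user@host
docker run -it --net=host \
  -e DISPLAY=$DISPLAY \
  -v $HOME/.Xauthority:/home/someuser/.Xauthority \
  -u someuser some-x11-image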
What follows is explanation of how everything works and other approaches to try
About Xauthority
ssh -X/-Y sets up a session key in the host's Xauthority file, then sets up a listen port on which it places an X11 proxy that uses the session key and converts it to be compatible with the key on your local machine. By design, the .Xauthority keys will be different between your local machine and the host machine. If you use jump hosts/ProxyCommand, the keys between the host and the container will yet again be different from each other. If you instead use ssh tunnels or a direct X11 connection, you will have to share the host's Xauthority with the container; in that case you can only have one active session per user, since new sessions will invalidate the previous ones by modifying the host's .Xauthority such that it only works with that session's ssh X11 proxy.
X11UseLocalhost no theory
Even though X11UseLocalhost no causes the X forwarding port to listen on the wildcard address, with --net host I could not redirect the container display to localhost:X.Y, where X and Y are from the host's $DISPLAY.
X11UseLocalhost yes is the easy way
If you choose X11UseLocalhost yes, the DISPLAY variable on the host becomes localhost:X.Y, which causes the ssh X11 proxy to listen only on localhost port X.
If X11UseLocalhost is no, the DISPLAY variable on the host becomes the host's hostname:X.Y, which causes the proxy to listen on 0.0.0.0:6000+X and causes X clients to reach out over the network to the hostname specified.
this is theoretical, I don't yet have access to docker on a remote host to test this
But this is the easy way. We bypass that by redirecting the DISPLAY variable to always be localhost, and do docker port mapping to move the data from localhost:X+1.Y on the container to localhost:X.Y on the host, where ssh is waiting to forward X traffic back to the local machine. The +1 makes us agnostic to running either --net=host or --net=bridge.
Setting up container ports requires specifying EXPOSE in the Dockerfile and publishing the ports with the -p flag.
Setting everything up manually without ssh -X
This works only with --net host. This approach works without xauth because we are directly piping to the Unix domain socket on the local machine.
ssh to the host without -X:
ssh -R 6010:localhost:6010 user@host
start docker with -e DISPLAY=localhost:10.1, or export it inside
in another terminal on the local machine:
socat -d -d TCP-LISTEN:6010,fork UNIX-CONNECT:/tmp/.X11-unix/X0
in the original terminal, run X clients
if the container is --net=bridge and you can't use docker ports, enable sshd in the container and use the jump host method above

View Activemq Messages with Jolokia and Hawt.io

Through browsing several websites and here on Stack Overflow, there seems to be a way to view the messages in an ActiveMQ queue using Jolokia and Hawt.io, but I have been unsuccessful to this point.
We are running our ActiveMQ (version 5.12.0) as an embedded service in our Spring webapp and exposed the Jolokia web services as explained on this webpage:
https://jolokia.org/reference/html/agents.html#agent-war-programmatic
When looking at the Jolokia web services via Hawt.io, I cannot figure out how to actually view the messages in the queue.
(Screenshot showing the queue size omitted.)
So, how can I view the messages in an ActiveMQ queue using Jolokia and Hawt.io?
The solution we ended up going with didn't actually use Jolokia or Hawt.io.
We ended up using JConsole.
When looking at ActiveMQ queues, if you use Java-serialized objects in the queue, the data won't be very readable, but if you serialize your objects to JSON, it is quite easy to see what is in the queue.
It is terribly important to read these directions all the way through, carefully.
These instructions discuss SSH tunneling, and it is quite easy to mess something up, and there are not very good log messages when things go wrong.
Remote Debugging
Due to security reasons, we have closed all the open debug ports on our remote virtual machines.
To get remote debugging to work, we will need to use SSH Tunneling to access the remote virtual machine debugging ports.
Remote Application Setup
The application that you want to remotely debug must have the JPDA Transport connector enabled.
After Java 1.4, to enable the JPDA transport, add the following VM parameter when starting your Java virtual machine:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=<remote_port_number>
The above attributes are hard to describe, but what is presented above works well. More information about the above attributes can be found on the Connection and Invocation Details page.
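For example, a full launch command might look like this (the jar name is hypothetical; the port matches the tunneling example below):
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=10001 -jar my-app.jar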
Local IDE Setup
In IntelliJ, to connect to a remote Java virtual machine, open the "Run/Debug Configurations" window.
Then select a new "Remote" configuration.
Enter the following values:
Debugger mode: Attach to remote JVM
Host: localhost
Port: <local_port_number>*
Use module classpath: <local_package>**
* The <local_port_number> should be the local port number of the SSH tunneling session that you will be starting. It is recommended that the <remote_port_number> and the <local_port_number> have the same value.
** This value should be whatever your local project is named.
SSH Tunneling
To actually connect to the remote debugging port, we'll need to use SSH Tunneling.
Run the following command via a terminal command line:
$ ssh -L <local_port_number>:localhost:<remote_port_number> -f <username>@<remote_server_name> -N
Example:
$ ssh -L 10001:localhost:10001 -f <your_username>@<your.server.com> -N
This command does the following:
Starts an ssh session with the <remote_server_name>.
Connects your <local_port_number> to the <remote_port_number> on localhost of the remote machine. In this case, we're saying connect to localhost:10001 of the <your.server.com> machine.
Start remote debugging in the IntelliJ IDE and you should then be connected to the remote Java virtual machine.
Resources
Intellij IDEA remotely debug java console program
Remote debug of a Java App using SSH tunneling (without opening server ports)
Remote JMX
We use JMX to look at the Spring Integration KahaDB queues.
Remote Application Setup
Add the following vm parameters:
-Dcom.sun.management.jmxremote.port=64250
-Dcom.sun.management.jmxremote.rmi.port=64250
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=127.0.0.1
The jmxremote.port and jmxremote.rmi.port can be any numbers, and they can be different values; it just helps if they are the same value when doing the SSH tunneling below.
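For reference, a hypothetical full launch command with these parameters (the jar name is a placeholder):
java \
-Dcom.sun.management.jmxremote.port=64250 \
-Dcom.sun.management.jmxremote.rmi.port=64250 \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Djava.rmi.server.hostname=127.0.0.1 \
-jar my-app.jar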
SSH Tunneling
$ ssh -L 64250:localhost:64250 -f <your_username>@<your.server.com> -N
JConsole Setup
This is done in a new terminal window.
$ jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=64250 service:jmx:rmi:///jndi/rmi://127.0.0.1:64250/jmxrmi
Resources
Why Java opens 3 ports when JMX is configured?
Clean Up
To close the ssh processes above:
$ lsof -i tcp | grep ^ssh
Then perform a kill on the process id.
Using jps and jstack to Help Debug
List all java processes running on a machine:
$ sudo jps
List the threads of an application running:
$ sudo -u <process_owner> jstack <process_id>
Example:
$ sudo -u tomcat jstack <pid>