Remote debugging using gdbserver over ssh

I want to debug a process running on a remote box from my host box (I built the code on the host machine).
Both have linux type operating systems.
It seems I can only communicate with the remote box from the host box via ssh (I tested using telnet).
I have followed the following steps to set this up:
On the Remote box:
Stop the firewall service:
service firewall_service stop
Attach the process to gdbserver
gdbserver --attach :remote_port process_id
On the Host box:
Set up port forwarding via ssh
sudo ssh remote_username@remote_ip -L host_port:localhost:remote_port
-f sleep 60m
Set up gdb to attach to a remote process:
gdb file.debug
(gdb) target remote remote_ip:remote_port
When I try to start debugging by running 'target remote remote_ip:remote_port' on the host box, I get a 'Connection timed out' error.
If you can see anything I am doing wrong, anything to check, or an alternative way to debug remotely over ssh, I would be grateful.
Thanks

This command:
sudo ssh remote_username@remote_ip -L host_port:localhost:remote_port ...
forwards local host_port to remote_port on remote_ip's localhost. This is useful only if you could not just connect to remote_ip:remote_port directly (for example, if that port is blocked by firewall).
This command:
(gdb) target remote remote_ip:remote_port
asks GDB to connect to remote_port on remote_ip. But you said that you can only reach remote_ip via ssh, so it's not surprising that GDB times out.
What you want:
ssh remote_username@remote_ip -L host_port:localhost:remote_port ...
(gdb) target remote :host_port
In other words, you connect to local host_port, and ssh forwards that local connection to remote_ip:remote_port, where gdbserver is listening for it.
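For illustration, here is the full sequence with placeholder port numbers (2345 on the remote box, 9999 locally); substitute your own values:
On the remote box:
gdbserver --attach :2345 <process_id>
On the host box:
ssh remote_username@remote_ip -L 9999:localhost:2345 -f sleep 60m
gdb file.debug
(gdb) target remote :9999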

Related

X11 forwarding of a GUI app running in docker

First off: I have read the answers to similar questions on SO, but none of them worked.
IMPORTANT NOTE: The answer below is still valid, but maybe jump to the end for an alternative.
The situation:
App with GUI is running in a docker container (CentOS 7.1) under Arch Linux. (machine A)
Machine A has a monitor connected to it.
I want to access this GUI via X11 forwarding on my Arch Linux client machine. (machine B)
What works:
GUI works locally on machine A (with /tmp/.X11-unix being mounted in the Docker container).
X11 forwarding of any app running outside of docker (X11 forwarding is set up and running properly for non-docker usage).
I can even switch the user while remotely logged in, copy the .Xauthority file to the other user and X11 forwarding works as well.
Some setup info:
Docker networking is 'bridged'.
Container can reach host (firewall is open).
DISPLAY variable is set in container (to host-ip-addr:10.0 because of TCP port 6010 where sshd is listening).
Packets to X forward port (6010) are reaching the host from the container (tcpdump checked).
What does not work:
X11 forwarding of the Docker app
Errors:
X11 connection rejected because of wrong authentication.
xterm: Xt error: Can't open display: host-ip-addr:10.0
Things I tried:
starting client ssh with ssh -Y option on machine B
putting "X11ForwardTrusted yes" in ssh_config on machine B
xhost + (to allow any client to connect) on machine B
putting Host * in ssh_config on machine B
putting X11UseLocalhost no in sshd_config on machine A (to allow non-localhost clients)
Adding the X auth token in the container with xauth add from the login user on machine A
Just copying over the .Xauthority file from a working user into the container
Making sure the .Xauthority file has the correct permissions and owner
How can I just disable all the X security stuff and get this working?
Or even better: How can I get it working with security?
Is there at least a way to enable extensive debugging to see where exactly the problem is?
Alternative: The first answer below shows how to effectively resolve this issue. However, I would recommend you look into a different approach altogether, namely VNC. I personally switched to a TigerVNC setup that replaces the X11 forwarding and have not looked back. The performance is leagues above what X11 forwarding delivered for me. There might be some instances where you cannot use VNC for whatever reason, but I would try it first.
The general setup is now as follows:
-VNC server runs on machine A on the host (not inside a docker container).
-Now you just have to figure out how to get a GUI from inside a docker container (which is a much simpler undertaking).
-If the docker container was started NOT from the VNC environment, the DISPLAY variable may need adjusting.
Thanks so much @Lazarus535
I found that for me adding the following to my docker command worked:
--volume="$HOME/.Xauthority:/root/.Xauthority:rw"
I found this trick here
EDIT:
As Lazarus pointed out correctly you also have to set the --net=host option to make this work.
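For reference, a combined invocation along those lines might look like this (image name is a placeholder):
docker run -it --net=host \
    -e DISPLAY=$DISPLAY \
    --volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
    <image_name> xclock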
Ok, here is the thing:
1) Log in to remote machine
2) Check which display was set with echo $DISPLAY
3) Run xauth list
4) Copy the line corresponding to your DISPLAY
5) Enter your docker container
6) xauth add <the line you copied>*
7) Set DISPLAY with export DISPLAY=<ip-to-host>:<no-of-display>
*so far so good right?
This was nothing new...however here is the twist:
The line printed by xauth list for the login user looks something like this (in my case):
<hostname-of-machine>/unix:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
Because I use the bridged docker setup, the X forwarding port is not listening locally, because the sshd is not running in the container. Change the line above to:
<ip-of-host>:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
In essence: Remove the /unix part.
<ip-of-host> is the IP address where the sshd is running.
Set the DISPLAY variable as above.
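A worked example of the above with placeholder values (display number :10, host IP 192.168.1.5, and the cookie are all assumed):
On machine A (the docker host), logged in via ssh:
$ echo $DISPLAY          # note the display number, assumed :10 here
$ xauth list             # gives e.g.: myhost/unix:10  MIT-MAGIC-COOKIE-1  0123456789abcdef
Inside the container:
$ xauth add 192.168.1.5:10 MIT-MAGIC-COOKIE-1 0123456789abcdef
$ export DISPLAY=192.168.1.5:10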
So the error was that the DISPLAY name in the environment variable was not the "same" as the entry in the xauth list / .Xauthority file and the client could therefore not authenticate properly.
I switched back to an untrusted X11 forwarding setting.
The X11UseLocalhost no setting in the sshd_config file however is important, because the incoming connection will come from a "different" machine (the docker container).
This works in any scenario.
Install xhost if you don't have it. Then, in bash,
export DISPLAY=:0.0
xhost +local:docker
After this run your docker run command (or whatever docker command you are running) with -e DISPLAY=$DISPLAY
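Assuming the host X socket is also mounted into the container (as in the docker run examples elsewhere on this page), a minimal invocation might be (image name is a placeholder):
docker run --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix <image_name> xclock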
Usually it works via https://stackoverflow.com/a/61060528/429476
But if you are running docker with a different user than the one used to ssh -X into the server, then copying the Xauthority only helped along with volume-mapping the file.
Example - I sshed into the server as the alex user, then ran docker after su - root and got this error:
X11 connection rejected because of wrong authentication.
After copying the .Xauthority file and mapping it as in https://stackoverflow.com/a/51209546/429476, it worked:
cp /home/alex/.Xauthority .
docker run -it --network=host --env DISPLAY=$DISPLAY --privileged \
--volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
-v /tmp/.X11-unix:/tmp/.X11-unix --rm <dockerimage>
More details on wiring here https://unix.stackexchange.com/a/604284/121634
Some clarifying remarks. Host is A, local machine is B
I've edited this post to note things that I think should work in theory but haven't been tested, vs. things I know to work.
Running docker non-interactively
If your docker container is not running interactively and is running sshd, you can use jumphosts or proxycommand and specify the X11 client to run. You should NOT volume-share your Xauthority file with the container, and sharing -e DISPLAY likely has no effect on future ssh sessions.
Since you essentially have two sshd servers, either of the following should work out of the box
if you have openssh-client greater than version 7.3, you can use the following command
ssh -X -J user-on-host@hostmachine,user-on-docker@dockercontainer xeyes
If your openssh client is older, the syntax is instead
(google says the -X is not needed in the proxy command, but I am suspicious)
ssh -X -o ProxyCommand="ssh -W %h:%p user-on-host@hostmachine" user-on-docker@dockermachine xeyes
Or ssh -X into host, then ssh -X into docker.
In either of the above cases, you should NOT share .Xauthority with the container
Running docker interactively from within the ssh session
The easiest way to get this done is to set --net=host and X11UseLocalhost yes.
If your docker is running sshd, you can open a second ssh -X session on your local machine and use the jumphost method as above.
If you start it in the ssh session, you can either pass -e DISPLAY=$DISPLAY or export it when you're in. You might have to export it if you attach to an existing container where this line wasn't used.
Use these docker args for --net=host and X11UseLocalhost yes (a combined command follows the list):
ssh -X to host
-e DISPLAY=$DISPLAY
-v $HOME/.Xauthority:/home/same-as-dash-u-user/.Xauthority
-u user
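Putting those args together, one possible command (user and image names are placeholders):
ssh -X user@host
docker run -it --net=host \
    -e DISPLAY=$DISPLAY \
    -v $HOME/.Xauthority:/home/containeruser/.Xauthority \
    -u containeruser \
    <image> xeyes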
What follows is explanation of how everything works and other approaches to try
About Xauthority
ssh -X/-Y sets up a session key in the host's Xauthority file, then sets up a listening port on which it places an X11 proxy that uses the session key and converts it to be compatible with the key on your local machine. By design, the .Xauthority keys will be different between your local machine and the host machine. If you use jumphosts/proxycommand, the keys between the host and the container will again be different from each other. If you instead use ssh tunnels or a direct X11 connection, you will have to share the host's Xauthority with the container; in that case you can only have one active session per user, since new sessions will invalidate the previous ones by modifying the host's .Xauthority such that it only works with that session's ssh X11 proxy.
X11UseLocalhost no theory
Even though X11UseLocalhost no causes the X server to listen on the wildcard address, with --net host I could not redirect the container display to localhost:X.Y, where X and Y are from the host $DISPLAY.
X11UseLocalhost yes is the easy way
If you choose X11UseLocalhost yes, the DISPLAY variable on the host becomes localhost:X.Y, which causes the ssh X11 proxy to listen only on localhost port 6000+X.
If X11UseLocalhost is no, the DISPLAY variable on the host becomes the host's hostname:X.Y, which causes the X server to listen on 0.0.0.0:6000+X and causes X clients to reach out over the network to the hostname specified.
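To illustrate the difference (hostname and display number are assumed):
# X11UseLocalhost yes: after ssh -X, the forwarding listener is on 127.0.0.1:6010
$ echo $DISPLAY
localhost:10.0
# X11UseLocalhost no: the forwarding listener is on 0.0.0.0:6010
$ echo $DISPLAY
myhost:10.0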
this is theoretical, I don't yet have access to docker on a remote host to test this
But this is the easy way. We bypass that by redirecting the DISPLAY variable to always be localhost, and do docker port mapping to move the data from localhost:X+1.Y on the container, to localhost:X.Y on the host, where ssh is waiting to forward x traffic back to the local machine. The +1 makes us agnostic to running either --net=host or --net=bridge
Setting up container ports requires specifying EXPOSE in the Dockerfile and publishing the ports with the -p option.
Setting everything up manually without ssh -X
This works only with --net host. This approach works without xauth because we are directly piping to your unix domain socket on the local machine
ssh to host without -X
ssh -R6010:localhost:6010 user@host
start docker with -e DISPLAY=localhost:10.1 or export inside
in another terminal on local machine
socat -d -d TCP-LISTEN:6010,fork UNIX-CONNECT:/tmp/.X11-unix/X0
In original terminal run xclients
If the container is using --net=bridge and you can't use docker ports, enable sshd on the container and use the jumphosts method.

Docker to run X applications while connected through SSH

I have used these instructions for Running Gui Apps with Docker to create images that allow me to launch GUI based applications.
It all works flawlessly when running Docker on the same machine, but it stops working when running it on a remote host.
Locally, I can run
docker run --rm -ti -e DISPLAY -e <X tmp> <image_name> xclock
And I can get xclock running on my host machine.
When connecting remotely to a host with XForwarding, I am able to run X applications that show up on my local X Server, as anyone would expect.
However if in the remote host I try to run the above docker command, it fails to connect to the DISPLAY (usually localhost:10.0)
I think the problem is that the XForwarding is setup on the localhost interface of the remote host.
So the docker host has no way to connect to DISPLAY=localhost:10.0 because that localhost means the remote host, unreachable from docker itself.
Can anyone suggest an elegant way to solve this?
Regards
Alessandro
EDIT1:
One possible way I guess is to use socat to forward the remote /tmp/.X11-unix to the local machine. This way I would not need to use port forwarding.
It also looks like openssh 6.7 will natively support unix socket forwarding.
When running X applications through SSH (ssh -X), you are not using the /tmp/.X11-unix socket to communicate with the X server. You are rather using a tunnel through SSH reached via "localhost:10.0".
In order to get this to work, you need to make sure the SSH server supports X connections to the external address by setting
X11UseLocalhost no
in /etc/ssh/sshd_config.
Then $DISPLAY inside the container should be set to the IP address of the Docker host computer on the docker interface - typically 172.17.0.1. So $DISPLAY will then be 172.17.0.1:10
You need to add the X authentication token inside the docker container with "xauth add" (see here)
If there is any firewall on the Docker host computer, you will have to open up the TCP ports related to this tunnel. Typically you will have to run something like
ufw allow from 172.17.0.0/16 to any port $TCPPORT proto tcp
if you use ufw.
Then it should work. I hope it helps. See also my other answer here https://stackoverflow.com/a/48235281/5744809 for more details.
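For illustration, the whole flow on the Docker host might look like this (display number, cookie, and image name are placeholders):
$ echo $DISPLAY                          # note the display number assigned by ssh -X (assumed :10)
$ xauth list                             # copy the MIT-MAGIC-COOKIE-1 value for that display
$ docker run -it -e DISPLAY=172.17.0.1:10 <image_name> bash
Inside the container:
$ xauth add 172.17.0.1:10 MIT-MAGIC-COOKIE-1 <cookie>
$ xclock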

Tunelling VNC through two ssh hops

I've long sought a solution to tunnel to a machine behind a firewall, passing VNC (or other ports) through. As explained in this old usenet post, which I'll recap here:
I have to log through an intermediate machine, something like:
local $ ssh interim
interim $ ssh remote
remote $ ...any commands...
This works fine. But now I am trying to tunnel a vnc session from remote to local and I can't find the magic incantation, using either one or two steps.
I recently found a wonderfully simple and adaptable solution: simply tunnel the ssh to the target system through the connection to the firewall. Like such:
local $ ssh -L 2222:remote:22 interim
interim $ ...no need to do anything here...
In another local console you connect to localhost on port 2222, which is actually your remote destination:
local $ ssh -C -p 2222 -L 5900:localhost:5900 localhost
remote $ ...possibly start your VNC server here...
In yet another local console:
local $ xtightvncviewer :0
It's that simple. You can add any port forwarding you want to the 2nd command (-L localport:localhost:remoteport), just as if there were no intermediate firewall. For instance, for RDP: -L 3389:localhost:3389
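With an OpenSSH client of version 7.3 or newer, the same forward can be set up in one command using the -J (jump host) option mentioned elsewhere on this page; hostnames are taken from the example above:
local $ ssh -C -J interim -L 5900:localhost:5900 remote
local $ xtightvncviewer :0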

View Activemq Messages with Jolokia and Hawt.io

Through browsing several websites and here on Stack Overflow, there seems to be a way to view the messages in an ActiveMQ queue using Jolokia and Hawt.io, but I have been unsuccessful to this point.
We are running our ActiveMQ (version 5.12.0) as an embedded service in our Spring webapp and exposed the Jolokia web services as explained on this webpage:
https://jolokia.org/reference/html/agents.html#agent-war-programmatic
When looking at the Jolokia web services via Hawt.io, I cannot figure out how to actually view the messages in the queue.
Here is a screenshot showing the queue size:
So, how can I view the messages in an Activemq queue using Jolokia and Hawt.io?
The solution we ended up going with didn't actually use Jolokia or Hawt.io.
We ended up using Jconsole.
When looking at ActiveMQ queues, if you used a Java serialized object in the queue, the data won't be very readable, but if you serialize your object to JSON, it is quite easy to see what is in the queue.
It is terribly important to read these directions all the way through, carefully.
These instructions discuss SSH Tunneling and it is quite easy to mess something up and there are not very good log messages when things go wrong.
Remote Debugging
Due to security reasons, we have closed all the open debug ports on our remote virtual machines.
To get remote debugging to work, we will need to use SSH Tunneling to access the remote virtual machine debugging ports.
Remote Application Setup
The application that you want to remotely debug must have the JPDA Transport connector enabled.
For versions after Java 1.4, to enable the JPDA transport, add the following VM parameter when starting your Java virtual machine:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=<remote_port_number>
The above attributes are hard to describe, but what is presented above works well. More information about the above attributes can be found on the Connection and Invocation Details page.
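For example, to listen for the debugger on port 10001 (matching the ssh example below; the jar name is a placeholder):
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=10001 -jar your-app.jar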
Local IDE Setup
In Intellij to connect to a remote java virtual machine, open the "Run/Debug Configurations" window.
Then select a new "Remote" configuration.
Enter the following values:
Debugger mode: Attach to remote JVM
Host: localhost
Port: <local_port_number>*
Use module classpath: <local_package>**
* The <local_port_number> should be the local port number of the ssh tunneling session that you will be starting. It is recommended that the <remote_port_number> and the <local_port_number> be the same value.
** This value should be whatever your local project is named.
SSH Tunneling
To actually connect to the remote debugging port, we'll need to use SSH Tunneling.
Run the following command via a terminal command line:
$ ssh -L <local_port_number>:localhost:<remote_port_number> -f <username>@<remote_server_name> -N
Example:
$ ssh -L 10001:localhost:10001 -f <your_username>@<your.server.com> -N
This command does the following:
Starts an ssh session with the <remote_server_name>.
Connects your <local_port_number> to the <remote_port_number> of the localhost of the remote machine. In this case, we're saying connect to localhost:10001 of the <your.server.com> machine.
Start remote debugging in the Intellij IDE and you should then be connected to the remote java virtual machine.
Resources
Intellij IDEA remotely debug java console program
Remote debug of a Java App using SSH tunneling (without opening server ports)
Remote JMX
We use JMX to look at the Spring Integration Kaha DB Queues.
Remote Application Setup
Add the following vm parameters:
-Dcom.sun.management.jmxremote.port=64250
-Dcom.sun.management.jmxremote.rmi.port=64250
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=127.0.0.1
The jmxremote.port and jmxremote.rmi.port can be any numbers and they can be different values; it just helps if they are the same value when doing the ssh tunneling below.
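For illustration, the parameters are passed on the java command line when starting the application (the jar name is a placeholder):
java -Dcom.sun.management.jmxremote.port=64250 \
     -Dcom.sun.management.jmxremote.rmi.port=64250 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -Djava.rmi.server.hostname=127.0.0.1 \
     -jar your-app.jar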
SSH Tunneling
$ ssh -L 64250:localhost:64250 -f <your_username>@<your.server.com> -N
JConsole Setup
This is done in a new terminal window.
$ jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=64250 service:jmx:rmi:///jndi/rmi://127.0.0.1:64250/jmxrmi
Resources
Why Java opens 3 ports when JMX is configured?
Clean Up
To close the ssh processes above:
$ lsof -i tcp | grep ^ssh
Then perform a kill on the process id.
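For example (the output line and process id are made up):
$ lsof -i tcp | grep ^ssh
ssh  12345 your_username  ...  TCP localhost:64250 (LISTEN)
$ kill 12345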
Using jps and jstack to Help Debug
List all java processes running on a machine:
$ sudo jps
List the threads of an application running:
$ sudo -u <process_owner> jstack <process_id>
Example:
$ sudo -u tomcat jstack <pid>

Warning: remote port forwarding failed for listen port 52698

I'm using SSH to access my university's afs system. I like to use rmate (remote TextMate), which requires SSH tunneling, so I included this alias in my .bashrc.
alias sshr='ssh -R 52698:localhost:52698 username@corn.myschool.edu'
It has always worked until now.
I had the same problem. In order to find the port that is already open, you have to issue this command on the 'corn.myschool.edu' computer:
sudo netstat -plant | grep 52698
And then kill all of the processes that come up with this (replace xxxx with the process ids)
sudo kill -9 xxxx
(UPDATED: changed the option to be -plant as it is a nice mnemonic)
I had another SSH connection open. I just needed to close that connection before I opened my SSH tunnel.
Further Explanation:
Once one ssh connection has been established, subsequent connections will produce a message:
Warning: remote port forwarding failed for listen port 52698
This message is harmless, as the forward can only be set up once and one forward will work for all ssh connections to the same machine. The original ssh session that opened the forward will stay open when you exit the shell until all remote editing sessions are finished.
I experienced this problem, but it was while connecting to a server on which I don't have sudo privileges, so the top response suggesting running sudo netstat ... wasn't feasible for me.
I eventually figured out it was because there were still instances of rmate running, so I used ps to list the running processes and then kill -9 pid (where pid is the process ID for rmate).
This solved my problem reported here as well. To avoid this notification, "AllowTcpForwarding" should be enabled in the SSH config.
In my case, the problem was that the remote system didn't have DNS properly set up, and it couldn't even resolve its own hostname. Make sure you have a working DNS in /etc/resolv.conf at the remote system.
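For reference, a minimal /etc/resolv.conf needs at least one reachable nameserver entry (the address here is just an example):
nameserver 192.168.1.1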