Difference between MaxStartups and MaxSessions in sshd_config

I want to limit the total number of SSH connections. I have gone through many sshd manuals, and they just say that these two fields can be used:
MaxStartups: the maximum number of concurrent unauthenticated connections to the SSH daemon
MaxSessions: the maximum number of (multiplexed) open sessions permitted per TCP connection.
How do these two settings combine to determine the total number of SSH connections?

The question is quite old and might have been better suited to Server Fault, but it never got an answer beyond citing the man page. My answer complements the man page by adding some context.
First of all, it should be noted that the two settings are independent of each other: they address different stages of an SSH connection.
MaxSessions
SSH supports session multiplexing, i.e. opening many sessions (e.g. a shell, an SFTP transfer and a raw command) at the same time over just one TCP connection. This saves the overhead of multiple TCP handshakes and multiple SSH authentications. The MaxSessions parameter restricts this multiplexing to a certain number of sessions.
If you set MaxSessions 1 and have a shell open, you can still run an SFTP transfer or open a second shell, but in the background SSH will open another TCP connection and authenticate again (use password authentication to make this visible).
If you set MaxSessions 0 you can make sure no one can open a session (a shell, SFTP or similar), but you can still connect to open a tunnel or ssh on to the next host.
Check out the ControlMaster section of ssh_config(5).
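To make both sides concrete, here is a minimal sketch (the host name is a placeholder and the values are only examples): the server caps multiplexing in sshd_config, while a client opts into multiplexing via ControlMaster in its ssh_config.
    # /etc/ssh/sshd_config on the server: allow at most 3 sessions per TCP connection
    MaxSessions 3

    # ~/.ssh/config on the client: reuse one TCP connection for later sessions
    Host example-server
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h:%p
        ControlPersist 10m
With this in place, a second ssh example-server or an sftp example-server started while the first connection is up rides on the existing TCP connection and counts against MaxSessions.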
MaxSessions
Specifies the maximum number of open shell, login or subsystem
(e.g. sftp) sessions permitted per network connection. Multiple
sessions may be established by clients that support connection
multiplexing. Setting MaxSessions to 1 will effectively disable
session multiplexing, whereas setting it to 0 will prevent all
shell, login and subsystem sessions while still permitting for-
warding. The default is 10.
MaxStartups
When you connect to the remote SSH server, there is a time window between establishing the connection and successful authentication. This window can be very small, e.g. when you configure your SSH client to use a certain private key for the connection, or it can be long, e.g. when the client first tries three different SSH keys, asks you to enter a password and then waits for you to enter a second-factor code you get via text message. The connections that are in this window at the same time are the "concurrent unauthenticated connections" mentioned in the man page cited above. If there are too many connections in this state, sshd stops accepting new ones. You can tweak MaxStartups to change when this happens.
A real-world use case for changing the default is, for example, a jump host that is used by provisioning software like Ansible. When asked to provision a lot of hosts behind the jump host, Ansible opens many connections at the same time, so it may run into this limit if connections are opened faster than the SSH host can authenticate them.
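For that jump-host scenario, a hedged example of raising the limit in sshd_config; the numbers are arbitrary and only illustrate the start:rate:full format:
    # /etc/ssh/sshd_config
    # start dropping 30% of new unauthenticated connections once 50 are pending,
    # and drop all of them once 200 are pending
    MaxStartups 50:30:200
Reload sshd afterwards (e.g. systemctl reload sshd, with the service name depending on the distribution) for the change to take effect.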
MaxStartups
Specifies the maximum number of concurrent unauthenticated con-
nections to the SSH daemon. Additional connections will be
dropped until authentication succeeds or the LoginGraceTime
expires for a connection. The default is 10:30:100.
Alternatively, random early drop can be enabled by specifying the
three colon separated values ``start:rate:full'' (e.g.
"10:30:60"). sshd(8) will refuse connection attempts with a
probability of ``rate/100'' (30%) if there are currently
``start'' (10) unauthenticated connections. The probability
increases linearly and all connection attempts are refused if the
number of unauthenticated connections reaches ``full'' (60).

Related

SSH client does not time out when connection to server has been disconnected

I think there is a simple answer to this question, but everything I find online is about preventing SSH client connections from timing out.
In this case, the client has established a connection to the server, and remains connected. Then the connection is disrupted, say the ethernet cable is unplugged, or the router is powered off.
When this happens, the client connection is not dropped.
The ssh client connection is part of a script and the line that performs the ssh login looks like this:
ssh -Nn script@example.com
The .ssh/config contains the following parameters:
Host *
ServerAliveInterval 60
ServerAliveCountMax 2
When these disconnects occur, I'd like the client ssh connection to time out and allow the script to attempt a reconnect...
Thanks!
I guess I was wrong about this being a simple question, since no one was able to provide an answer.
My further reading and asking led to one reply on the openssh IRC channel, around 2022-06-06. I was advised that the options:
ServerAliveInterval 60
ServerAliveCountMax 2
often don't disconnect the client as one might expect.
The ssh_config man page:
ServerAliveCountMax
Sets the number of server alive messages (see below) which
may be sent without ssh(1) receiving any messages back from
the server. If this threshold is reached while server alive
messages are being sent, ssh will disconnect from the server,
terminating the session...
The default value is 3. If, for example, ServerAliveInterval
(see below) is set to 15 and ServerAliveCountMax is left at
the default, if the server becomes unresponsive, ssh will
disconnect after approximately 45 seconds.
Seems to pretty conclusively state that disconnecting on lack of server response is the intention of these parameters. However, in practice this doesn't happen in all cases. Maybe the caveat here is: "while server alive messages are being sent"?
If the application calls for a reliable client disconnect when the server becomes unresponsive, the advice was to implement an external method, separate from the ssh client login script, that monitors server responsiveness, and kills the ssh client process on timeout.
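As a rough illustration of that advice, here is a minimal watchdog sketch; the host name, the use of ping as the responsiveness check, and the timing values are all assumptions, not a tested recipe:
    #!/bin/sh
    # start the tunnel in the background and remember its PID
    ssh -Nn script@example.com &
    SSH_PID=$!

    # every 60 seconds, check that the server still answers;
    # after 3 consecutive failures, kill the ssh client so the
    # calling script can attempt to reconnect
    FAILS=0
    while kill -0 "$SSH_PID" 2>/dev/null; do
        if ping -c 1 -W 5 example.com >/dev/null 2>&1; then
            FAILS=0
        else
            FAILS=$((FAILS + 1))
            [ "$FAILS" -ge 3 ] && kill "$SSH_PID"
        fi
        sleep 60
    done
Any check that actually exercises the server (e.g. a short ssh command with a ConnectTimeout) can replace ping, which only proves the host is reachable, not that sshd is responsive.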

Forward Traffic on Port through SSH Reverse Tunnel

I have an interesting scenario. I've searched every where, and I have bits and pieces of information, however, I don't have the full picture, and it's driving me nuts.
I also want to mention I'm nowhere near sysadmin status; however, I can get around my infrastructure well enough to get the job done.
I've got 3 endpoints. I've got a device inside a network (endpoint#1) that has set up a reverse tunnel to one of my servers (endpoint#2). I've got another server (endpoint#3) that has to send requests to the device (endpoint#1) through the connection server (endpoint#2).
I'm currently able to sustain connections between endpoint#1 and endpoint#2, and send requests from endpoint#2 to endpoint#1 without issue, however, I need endpoint#3 to be able to talk to endpoint#1 through endpoint#2.
I've tried searching for port forwarding scenarios and reverse tunnel scenarios, however, whatever it is that I'm doing is not allowing network traffic through.
How can I set up HTTP traffic to GET/POST from endpoint#3 to endpoint#2 and pass through to endpoint#1 via the specified reverse tunnel (on its specified port)? HELP!
Found the answer. It uses roughly the same SSH syntax I was already using to set up the reverse tunnel; however, it adds the bind address (the internal IP address of the network it's on) and uses GatewayPorts clientspecified in the sshd_config (although I'm not 100% sure I needed this; it is an option I set, though).
On endpoint#1:
- ssh -R [endpoint#2.internal.ip.address]:[port]:[localhost]:[port-to-map-to-on-endpoint#1] user@endpoint#2
On endpoint#3:
- curl -X POST -d {data} http://endpoint#2.internal.ip.address:[port]/path/to/resource
This will then allow the call on endpoint#3 to be passed through to endpoint#1.
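As a concrete sketch with made-up values (assume 10.0.0.2 is endpoint#2's internal address, 8080 is the port the service on endpoint#1 listens on, and endpoint2 stands for whatever name or address endpoint#1 uses to reach endpoint#2):
    # on endpoint#2, in /etc/ssh/sshd_config (reload sshd afterwards):
    GatewayPorts clientspecified

    # on endpoint#1: bind port 8080 on endpoint#2's internal interface and
    # forward it to the local service on endpoint#1 listening on port 8080
    ssh -N -R 10.0.0.2:8080:localhost:8080 user@endpoint2

    # on endpoint#3:
    curl -X POST -d '{"key":"value"}' http://10.0.0.2:8080/path/to/resource
Without GatewayPorts clientspecified (or yes), sshd binds remote forwards to the loopback interface only, which is exactly why endpoint#3 would not be able to reach the tunnel.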

Can the GUI of an RDP session remain active after disconnect

I'm running automated testing procedures that emulate keystrokes and mouse clicks 24/7.
Although it runs fine locally, on an RDP session it stops running once minimized or disconnected. Apparently, the GUI doesn't exist if you can't physically see it on the screen.
There is a registry work-around for keeping the GUI active when the window is minimized, but I know of no way to keep it alive after disconnect.
Ideally, I would have this run on the server Windows console session which would not care about being disconnected but in a hosted environment (I tried Amazon and Go Daddy) there is no way to access the console session.
Does anyone know how I can get around this? Basically any solution that allows me to run my application on a VPS. I need the reliability of a host but the flexibility to run it as if I was sitting right in front.
Yes, you can.
There are two types of sessions in Windows: the "console" session, which is always active and of which there can be at most one, and "terminal" sessions, à la RDP. Using "rdpwrap" on GitHub, you can have an unlimited number of terminal sessions.
RDP sessions will become "deactivated" when there is not a connection to them. Programs will still run, but anything that depends on GUI interaction will break badly.
Luckily, we can "convert" a terminal session into the console session, instead of disconnecting from Remote Desktop normally, by running the following command from inside the terminal session (save it as a .bat file and run it as administrator; if you type it directly at a command prompt, use %s instead of %%s):
for /f "skip=1 tokens=3" %%s in ('query user %USERNAME%') do (tscon.exe %%s /dest:console)
This will disconnect you from the session, but it will still run with full graphical context. This answers your question. You can reconnect to it and it will become a terminal session again, and you can do this infinitely. And, of course, autohotkey works perfectly.
But, what if you need more than one persistent, graphics-enabled session?
To get an unlimited amount of graphics-persistent sessions, you can run Remote Desktop and start terminal sessions from within the "main" session described above. Normally Remote Desktop prevents this "loopback" behavior, but if you specify "127.0.0.2" for the destination, you will be able to start a terminal session with any number of the users on the remote machine.
Graphics persistence will only be present in terminal sessions if they are not minimized, unless you create and set RemoteDesktop_SuppressWhenMinimized to 2 at the following registry location:
HKEY_LOCAL_MACHINE\Software\Microsoft\Terminal Server Client
With this you can get an unlimited number of completely independent graphics-persistent remote sessions from a single machine.
This could be a workaround, although I have not tried it myself, and it involves having another machine.
Let's assume that at the moment you are creating a session to myserver.com
Local Client ----> myserver.com
Instead of doing that, you could try having a separate server (let's call it myslave.com) and use that to establish a session
Local Client ----> myslave.com ----> myserver.com
Then, if you disconnect the Local Client ----> myslave.com session, the GUI of the myslave.com ----> myserver.com session should remain active.
It will work only if you are connected to the console session of myslave.com.
I found a similar way. I had the same problem: I downloaded RDP Wrapper, which lets you configure a multi-session RDP server, and an included tool (rdpchecker.exe) lets you connect to localhost, so you can connect to your server from your server and you don't need that middle client.
Building on the workaround above that involves another machine: if you are using a Windows server, you don't even need another machine.
1) Connect to the server with the remote desktop connection (#con1).
2) Create a new alias for your server, such as "127.0.0.2", in Windows\System32\drivers\etc\hosts (see the example after these steps).
3) Now establish a new remote desktop connection from your windows server (in #con1) to itself (#con2).
4) Finally, start your application that needs a GUI, e.g. UI-Path, in #con2 and then close #con1.
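For step 2, the hosts entry might look like this (the alias name is made up):
    # C:\Windows\System32\drivers\etc\hosts
    127.0.0.2    myserver-loopback
In step 3 you would then point the Remote Desktop client at myserver-loopback (or directly at 127.0.0.2).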
I ran into the same problem and noticed that using VNC (TightVNC) to take over the remote machine seems to solve the issue. I guess VNC uses the console screen. Once activated and logged in, it stays logged in, even after a VNC disconnect. Make sure that the screen never turns off in the power options.
Take note that keeping the console logged-in on a VPS is in general not recommended.

Allow real user to log in while a brute-force attack is active (and is being slowed by throttling)

Hypothetical scenario:
Let's say you have an attacker continuously attempting to log in to your website. After n attempts, login throttling kicks in and the attacker is forced to wait an amount of time before their login attempts are processed again. Since in this scenario we are talking about an automated attack, it is not smart enough to figure out that throttling is active, and it keeps posting user/pass combos to the login form. Eventually the throttling timeout finishes and the attacker is able to post one more user/pass combo to the server, which in turn activates the throttling for this user again.
So this effectively creates a DoS situation for the real user, because the automated attack ensures that the login throttling is always active, so the real user can never log in.
How can this scenario be counteracted?
So far I have:
Whitelisting IPs that have been known to have successful login attempts. This doesn't work if the real user is logging in from a previously unknown IP.
Temporarily blacklisting failed-attempt IPs. This doesn't work if the attack is coming from different IPs.
1) Send Source-Quench upstream (aka ECN)
2) SAML (Security Assertion Markup Language)
3) Traceroute
4) Port knocking & other "two-factor" schemes
5) Code to compare login packets -- iptables
6) SELinux/iptables-connmark
7) Connected to 6 above, "ip rule" (particularly mark & fwmark)
8) iptables-extensions: connlimit / hashlimit (see the sketch after this list)
9) CAPTCHA
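As one concrete illustration of the connlimit/hashlimit idea in item 8, a hedged iptables sketch that rate-limits new connections per source address (the port and thresholds are arbitrary):
    # allow at most 4 new connections per minute (burst 8) to port 443
    # from any single source address; drop the rest from that source
    iptables -A INPUT -p tcp --dport 443 --syn \
        -m hashlimit --hashlimit-name login-throttle \
        --hashlimit-mode srcip --hashlimit-upto 4/minute \
        --hashlimit-burst 8 -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 --syn -j DROP
Because the limit is keyed per source IP, a legitimate user on a different address is unaffected; it does not help against a distributed attack, which is why the defense-in-depth point below still applies.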
Hint: see similar Can't Access Plesk Admin Because Of DOS Attack, Block IP Address Through SSH?
But defense in depth is critical.
Sincerely,
ArrowInTree

Replicate logmein.com behavior for smart devices

I have several smart devices that run Windows CE5 with our application written in .NETCF 3.5. The smart devices are connected to the internet with integrated GPRS modems. My clients would like a remote support option, but VNC and similar tools don't seem to be able to do the job. I found several issues when trying to get VNC to work. First, it has severe performance issues when run on the smart device. The second issue is that the internet provider has a firewall that blocks all incoming requests that didn't originate from the smart device itself. Therefore I cannot initiate a remote desktop session with the smart devices, since the request didn't originate from the smart device.
We could get our own APN; however, they are too expensive and the monthly cost is too great for the number of smart devices we have deployed. It's more economical for us if we can add development costs to the initial product cost, because our customers dislike high monthly costs and would rather pay a large sum up front. A remote support solution would also allow us to minimize our on-site support.
That's why we more or less decided to roll our own remote desktop solution. We have code for capturing images on the smart device and only getting the data that has changed since the last cycle. What we need is a communication solution like logmein.com (which doesn't support WinCE5), where the smart devices connect to a server from which we can then stream the data to our support personnel's clients. Basically, the smart device initiates a connection to our server and starts delivering screen data when the server requests it. A support client connects to the server, gets a list of available streams and then selects one to listen in on.
Any suggestions for how to do it, considering we have to implement the solution in .NETCF 3.5 on the smart devices? We have limited communication experience beyond simple SOAP web services.
Since you're asking for a suggestion, I'll suggest this:
Don't reinvent. Reuse whatever you can. You can perform tunneling with SSH, so make an SSH connection (say, a port of PuTTY or plink, inside a loop) out via GPRS on your smart device; forward remote ports to local ports, bound to the SSH server's local address (127.0.0.1 (sshd):4567 => localhost (smart_device_01):4567). Your clients connect to your SSH server and access the assigned port for each device.
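A rough sketch of that outbound-tunnel idea, written as a shell loop for brevity (the server name, account and port are all made up; on the device it would be whatever looping mechanism is available):
    # run on the smart device: keep a reverse tunnel up, restarting it when it drops;
    # port 4567 on the SSH server's loopback is forwarded back to the device's
    # local remote-display service on port 4567
    while true; do
        plink -batch -N -R 127.0.0.1:4567:localhost:4567 device01@tunnel.example.com
        sleep 30
    done
Support staff then log in to tunnel.example.com and connect to 127.0.0.1:4567 there to reach device01's screen stream.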
With that said, that's probably not the answer you're looking for. Below is the answer you're probably looking for.
Based on my analysis of how LogMeIn works, you'll want to make an HTTPS or TLS server where your smart devices will push data. Let's call it your tunnel server.
You'll probably want to spawn a new thread that repeatedly attempts to make connections to the tunnel server (outbound connections from smart device to the server, per your specified requirement). With a protocol like BEEP/BXXP, you can encapsulate and multiplex message-oriented or stream-oriented sessions. Wrap BXXP/BEEP into TLS, and tunnel through to your tunnel server. BEEP lets you multiplex streams onto one connection -- if you want the full capabilities of an in-house LogMeIn solution, you'll want to use something like this.
Once a connection is established, make a new BEEP session. With the new session, tell the tunnel server your system identification information (device name, device authentication signature). Write heartbeat data (timestamp periodically) into this new session.
Set up a callback (or another thread) which interfaces to your BEEP control session. Watch for a message requesting service. When such a request comes in, spawn the required threads to copy data from your custom remote-display protocol and push this data back through the same channel.
This sets the basic premise for your Smart Device's program. You can add functionality to this as you desire, say, to match what LMI's IT Reach subscription provides (remote registry, secure tunneled Telnet, remote filesystem, remote printing, remote sound... you get the idea)
I'll make some assumptions that you know how to properly secure all this stuff for authentication and authorization for your clients (Is user foo allowed to access smart device bar?).
On your tunnel server, start a server socket (listening for inbound connections, or from the perspective of smart devices, smart device outbound connections) that demultiplexes connections and sessions. Once a connection is opened, fire up BEEP and register a callback / start a thread to wait for the authentication/heartbeat session. Perform the required checks for AAA to smart devices -- are these devices allowed, are they known, how much does it cost, etc. Your tunnel server forwards data on behalf of your smart devices. For each BEEP session, attach a name (device name) to the BEEP session after the AAA procedures succeed; on failure, close the connection and let the AAA mechanism know (to block attackers). Your tunnel server should also set up what's required for interacting with the frontend -- that is, it should have the code to interact with BEEP to demultiplex the stream for your remote display data.
On your frontend server (can be the same box as the tunnel server), install the routine for AAA -- check if the user is known, if the user is allowed, how much the user should be charged, etc. Once all the checks are passed, make a secured connection from the frontend server to tunnel server. Get the device names that the tunnel server knows that the user is allowed to access. At this point, you should be able to get a "plaintext" stream, based on the device name, from the tunnel server. Forward this stream back to the user (via TLS, for example, or again via BEEP over TLS), or send the required configuration for your remote display client to connect to your tunnel server with the required parameters to access the remote display protocol's stream.