Using Snort/Suricata, I want to generate an SSH alert for every failed login to my home network

I am setting up an Intrusion Detection System (IDS) using Suricata. I want to write a custom rule which will generate an alert whenever a failed login attempt occurs on my virtual machine.
Example:
alert tcp any any -> $HOME_NET 22 (msg:"SSH Brute Force Attempt"; flow:established,to_server; content:"SSH"; nocase; offset:0; depth:4; detection_filter:track by_src, count 2, seconds 2; sid:2005; rev:1;)
I tried various combinations for the SSH rule but am not able to see any alerts in the Suricata Alerts section despite multiple bad SSH attempts (bad attempts = logging in with an invalid password in order to generate alerts).
Kindly let me know how to go about this.

Since you are really attempting to look at the encrypted content (which is where the authentication and the subsequent failure message will be), Snort/Suricata isn't the ideal tool to use in the way that you describe. Instead, log monitoring would be a better approach.
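As a quick sketch of the log-monitoring approach (paths and service names vary by distribution: Debian/Ubuntu log to /var/log/auth.log, RHEL/CentOS to /var/log/secure, and the systemd unit may be ssh or sshd), the failed attempts you are trying to alert on are already recorded in plain text:
grep "Failed password" /var/log/auth.log
journalctl -u sshd --since "1 hour ago" | grep "Failed password"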
There are other alternatives, however. You might look into Fail2Ban for automatic blocking at the IPTables level.
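If you go the Fail2Ban route, a minimal sketch of an /etc/fail2ban/jail.local entry might look like the following (jail and option names as in current Fail2Ban releases; the thresholds are only examples): ban a source for an hour after 5 failures within 10 minutes.
[sshd]
enabled  = true
port     = ssh
maxretry = 5
findtime = 600
bantime  = 3600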
If you really want to do it with Snort/Suricata, you could use alert thresholds. For example:
alert tcp $EXTERNAL_NET any -> $HOME_NET 22 (msg:"Possible SSH brute forcing!"; flags: S+; threshold: type both, track by_src, count 5, seconds 30; sid:10000001; rev: 1;)
This tells Snort/Suricata to generate an alert on inbound connections (inbound packets with SYN set) when 5 or more connections are seen from a single source within 30 seconds. The threshold type "both" means that it will not alert until this threshold is passed, and that it will then generate only one alert to notify you rather than inundating you with alerts.
Note that I have specified the flags as S+ rather than just SYN. Remember that ECN has become a real "thing", and you may find that the two bits Snort/Suricata still call "Reserved" are set as a result of an ECN negotiation.

Related

How to set a loop forwarding mode from one NIC to another in DPDK testpmd?

Testpmd is running in a Hyper-V VM, and there are two NICs which connect to an "internal virtual switch". I just want to test the availability of the netvsc PMD.
./app/dpdk-testpmd -l 2,3 -- --total-num-mbufs=2048 -i --portmask=0x3 --port-topology=loop
I have used "start" or "start tx_first", and then used "show port stats all" to check. There are no Tx-packets or Rx-packets on two NICs.
Then I used "set fwd txonly", and I could find Tx-packets on two NICs, but it is not my want. So what steps can I do?
Typically, one wants to use a packet generator on the side that is opposite to the pair of ports harnessed by testpmd. Such a generator starts sending packets, whilst testpmd simply receives them on one port and transmits them back from the other one. This is what port-topology of type paired stands for, and this port-topology is used by default in testpmd. Another parameter, forward-mode, in turn, is set to io by default, which means that testpmd does not change the received packets before transmitting them back (for example, it does not swap MAC addresses, etc.).
However, in your case there's no packet generator employed, and that means that testpmd must generate and send a batch of packets itself in order to kick-start forwarding. This is accomplished by specifying option --tx-first.
But apart from omitting the --tx-first option, you also use --port-topology=loop, which might be the reason behind your setup being non-functional. Variant loop means that packets received by a given port (say, Port 0) must be transmitted back from the very same port (that is, from Port 0). What you might want here is --port-topology=paired, which, as I stated before, is used by default anyway.
So, the short of it is that you should probably try running testpmd as follows:
./app/dpdk-testpmd -l 2,3 -- --total-num-mbufs=2048 --portmask=0x3 --tx-first
Please note that this way forwarding is started automatically, but you get no testpmd> prompt to enter commands in. Should you wish to start forwarding automatically and, at the same time, get an interactive command prompt, please try running testpmd this way:
./app/dpdk-testpmd -l 2,3 -- --total-num-mbufs=2048 --portmask=0x3 --tx-first --auto-start -i
The DPDK application testpmd is not a packet generator that will automatically generate and send packets. But there is an option, --tx-first, which allows sending a burst (32 packets by default) of dummy packets from each interface.
Since your environment is physically connected, this should work. But I highly recommend first checking with the Linux driver whether ping or ARP is able to reach the cross-connected interface.
Note:
I highly recommend reading the testpmd documentation for more details.
To enable promiscuous mode, use the command set promisc all on.
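For reference, once testpmd is up with the default paired topology, a typical interactive sequence (commands from the testpmd documentation; io is already the default forward mode) looks like:
testpmd> set fwd io
testpmd> start tx_first
testpmd> show port stats all
testpmd> stop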

Handling Telnet negotiation

I'm trying to implement a Telnet client using C++ and Qt as the GUI.
I have no idea how to handle the Telnet negotiations.
Every telnet command is preceded by IAC, e.g.
IAC WILL SUPPRESS_GO_AHEAD
The following is how I handle the negotiation:
Search for the IAC character in the received buffer
According to the command and option, respond to the request
My questions are described as follows:
It seems that the telnet server won't wait for a client response after a negotiation command is sent.
e.g. (sends two or more commands without waiting for a client response)
IAC WILL SUPPRESS_GO_AHEAD
IAC WILL ECHO
How should I handle such a situation? Handle both requests or just the last one?
What would the option values be if I don't respond to the requests? Are they set to defaults?
Why won't the IAC character (255) be treated as data instead of a command?
Yes, it is allowed to send out several negotiations for different options without synchronously waiting for a response after each of them.
Actually, it's important for each side to try to continue (possibly after some timeout, if you did decide to wait for a response) even if it didn't receive a reply: there are legitimate situations, according to the RFC, in which there shouldn't or mustn't be a reply, and the other side might also just ignore the request for whatever reason; you don't have control over that.
You need to consider both negotiation requests the server sent, as they are both valid requests (you may choose to deny one or both, of course).
I suggest you handle both of them (whatever "handling" means in your case) as soon as you notice them, so as not to risk getting the server stuck if it decides to wait for your replies.
One possible way to go about it is presented by Daniel J. Bernstein in RFC 1143. It uses a finite state machine (FSM) and is quite robust against negotiation loops.
A compliant server (the same goes for a compliant client) defaults all negotiable options to WON'T and DON'T (i.e. disabled) at the start of the connection and doesn't consider them enabled until a DO or WILL request has been acknowledged by a WILL or DO reply, respectively.
Not all servers (or clients for that matter) behave properly, of course, but you cannot anticipate all ways a peer might misbehave, so just assume that all options are disabled until enabling them was requested and the reply was positive.
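As an illustration of that default-to-disabled rule, here is a minimal Python sketch (not Qt/C++; constants per RFC 854, helper name made up) that tracks the server's side of each option and refuses anything the client doesn't support:
IAC, WILL, WONT, DO, DONT = 255, 251, 252, 253, 254
ECHO, SUPPRESS_GO_AHEAD = 1, 3

SUPPORTED = {ECHO, SUPPRESS_GO_AHEAD}   # options we agree to let the server enable on its side
server_enabled = {}                     # option number -> bool; missing means disabled (the default)

def reply_to(command, option):
    """Return the bytes to answer one IAC <command> <option> request with."""
    if command == WILL:                            # server wants to enable <option> on its side
        if option not in SUPPORTED:
            return bytes([IAC, DONT, option])
        if not server_enabled.get(option):
            server_enabled[option] = True
            return bytes([IAC, DO, option])
        return b""                                 # already enabled: don't re-acknowledge (avoids loops)
    if command == WONT:                            # server turns <option> off on its side
        if server_enabled.get(option):
            server_enabled[option] = False
            return bytes([IAC, DONT, option])
        return b""
    if command == DO:                              # server asks us to enable <option>; this sketch never does
        return bytes([IAC, WONT, option])
    if command == DONT:                            # we never enabled anything on our side, nothing to acknowledge
        return b""
    return b""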
I'll assume here that what you're actually asking is how the server is going to send you a byte of 255 as data without you misinterpreting it as an IAC control sequence (and vice versa, how you should send a byte of 255 as data to the server without it misinterpreting it as a telnet command).
The answer is simply that instead of a single byte of 255, the server (and your client in the opposite direction) sends IAC followed by another byte of 255, so in effect doubling all values of 255 that are part of the data stream.
Upon receiving an IAC followed by 255 over the network, your client (and the server in the opposite direction) must replace that with a single data byte of 255 in the data stream it returns.
This is also covered in RFC 854.
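To make the doubling concrete, here is a minimal Python sketch (constant per RFC 854; in a real client the un-escaping happens while scanning the buffer for IAC commands, not as a separate pass):
IAC = 255

def escape_outgoing(data: bytes) -> bytes:
    # every literal 255 in the application data is doubled before it goes on the wire
    return data.replace(bytes([IAC]), bytes([IAC, IAC]))

def unescape_incoming(data: bytes) -> bytes:
    # IAC IAC received from the wire collapses back into a single 255 data byte
    return data.replace(bytes([IAC, IAC]), bytes([IAC]))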

How to validate WebRTC connection signals when peers can't trust each other?

I am building a WebRTC app where two users are selected at random and then connect to each other to chat. Both clients keep an open WebSocket connection, and I am planning to use this to exchange their offers/answers to signal a connection. The cases I am trying to account for are when a peer intentionally sends bad configuration information, and when a peer spontaneously disconnects in the middle of the signaling exchange.
My solution to the first case is to have the server keep the state of the exchange, so when the connection is first established I would expect user A to provide an offer and user B an answer. Is this appropriate? Or should this be implemented exclusively client-side?
My solution to the second problem feels like a hack to me. What I am trying to do is notify the user that a match has been made, and then the user will set a timeout of, say, 20 seconds; if a connection hasn't been made in that amount of time, it should move on...
Are these appropriate solutions? How do you reliably establish a WebRTC connection when either peer can't be trusted? Should the signaling server be concerned with the state of the exchange?
It sounds like you're more concerned about call set-up errors than about being able to trust the identity of the remote peer. They are two very different problems.
Assuming it is the call set-up errors you are concerned about, you shouldn't be trying to avoid them; you should be trying to make sure your application can handle them. Network connection issues are something that will always crop up and need to be handled.
Setting a timer for the establishment of the WebRTC call to complete is a logical solution. Displaying a warning to the user that the time limit is approaching also seems like a good idea. SIP is a signalling protocol with a defined timeout for the completion of a transaction; if the transaction doesn't complete within that time, an error response is generated. You could use the same approach.
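If the signalling server does keep the exchange state, a rough Python sketch of the idea (all names hypothetical; the 20-second timeout is the figure from the question) is to forward only the message the exchange currently expects and to abandon the match once the timer runs out:
import time

SIGNALING_TIMEOUT = 20  # seconds before the match is abandoned

class Exchange:
    """Tracks one offer/answer exchange between two matched peers on the server."""
    def __init__(self, offerer, answerer):
        self.offerer = offerer
        self.answerer = answerer
        self.state = "awaiting_offer"        # -> "awaiting_answer" -> "done"
        self.started = time.monotonic()

    def expired(self):
        return time.monotonic() - self.started > SIGNALING_TIMEOUT

    def on_message(self, sender, kind):
        """Return what the server should do with an incoming signalling message."""
        if self.expired():
            return "abort"                    # tell both peers to move on to a new match
        if self.state == "awaiting_offer" and sender == self.offerer and kind == "offer":
            self.state = "awaiting_answer"
            return "forward"                  # relay the offer to the answerer
        if self.state == "awaiting_answer" and sender == self.answerer and kind == "answer":
            self.state = "done"
            return "forward"
        return "reject"                       # out of order, or not from the expected peer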

Cryptography: Verifying Signed Timestamps

I'm writing a peer-to-peer network protocol based on private/public key pair trust. To verify and deduplicate messages sent by a host, I use timestamp verification. A host does not trust another host's message if the signed timestamp has a delta (to the current time) of greater than 30 seconds or so.
I just ran into the interesting problem that my test server and my second client were about 40 seconds out of sync (fixed by updating NTP).
I was wondering what an acceptable time difference would be, and whether there is a better way of preventing replay attacks. Supposedly I could have one client supply a random text to hash and sign, but unfortunately this won't work, as in this situation I have to write messages once.
A host does not trust another host's message if the signed timestamp has a delta (to the current time) of greater than 30 seconds or so.
Time-based schemes are notoriously difficult. I can't tell you how many problems I had with mobile devices that would not or could not sync their clock with the network.
Counter-based schemes are usually easier and do not DoS themselves.
I was wondering what an acceptable time difference would be...
Microsoft's Active Directory uses 5 minutes.
if there is a better way of preventing replay attacks
Counter-based with a challenge/response.
I could have one client supply a random text to hash and sign, but unfortunately this won't work as in this situation I have to write messages once...
Perhaps you could use a {time, nonce} pair. If the nonce has not been previously recorded, then act on the message if it is within the time delta. Then hold the message (with its {time, nonce}) for a window (5 minutes?).
If you encounter the same nonce again, don't act on it. If you encounter an unseen nonce but it is outside the time delta, then don't act on it. Purge your list of nonces occasionally (every 5 minutes?).
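A rough Python sketch of that {time, nonce} window (the 30-second delta is the figure from the question and the 5-minute retention the one suggested above):
import time

MAX_SKEW = 30      # seconds of clock difference tolerated
RETENTION = 300    # how long to remember nonces; must be at least MAX_SKEW

seen = {}          # nonce -> time at which it was first seen

def accept(message_time, nonce, now=None):
    """Return True if the message should be acted on, False if it is stale or a replay."""
    now = time.time() if now is None else now
    # drop nonces old enough that a replay would fail the time check anyway
    for n, t in list(seen.items()):
        if now - t > RETENTION:
            del seen[n]
    if abs(now - message_time) > MAX_SKEW:
        return False               # outside the allowed time delta
    if nonce in seen:
        return False               # already seen: replay
    seen[nonce] = now
    return True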
I'm writing a peer to peer network protocol based...
If you look around, you will probably find a suitable protocol in the academic literature.

ClientAliveInterval to prevent ssh session freezing / disconnecting?

After having my VPS upgraded to CentOS 5.5, I began to experience frozen/disconnected shell sessions if I had neglected them for a certain amount of time. Very annoying. The solution I found was to edit /etc/ssh/sshd_config and set ClientAliveInterval to the desired number of seconds. My understanding is that this essentially substitutes for activity from the client user (me) and so keeps the session from disconnecting.
Having initiated a shell session after making this minor change, I seem to be able to maintain a neglected session. However, just because a thing seems to be working doesn't mean the best, or even correct, approach was necessarily taken.
Is there a better / different way to prevent a shell session from freezing?
The ClientAliveInterval value can increase the sshd timeout. You can try the following commands as well:
echo "TMOUT=300" >> /etc/bashrc
echo "readonly TMOUT" >> /etc/bashrc
echo "export TMOUT" >> /etc/bashrc
No guarantees, but this is what I recently started using on the server. 10 seconds seems like a short time, but I don't trust my phone to keep the connection alive. I suppose you could increase the number of seconds until the problem starts again, then dial it back.
ClientAliveInterval 10
Sets a timeout interval in seconds after which if no data has been received from the client, sshd(8) will send a message through the encrypted channel to request a response from the client. The default is 0, indicating that these messages will not be sent to the client.
ClientAliveCountMax 200
If it fails, keep trying for about 30 minutes; in other words, keep trying 200 times, every 10 seconds. My logic could be flawed, though, depending on what happens after 10 seconds. Assuming the client is quiet (maybe I'm just reading), does the "alive" message reset the max count if it succeeds? Is inactivity considered failure? Or is failure a lack of acknowledgement of the alive message? Until I know the answer, I figure it's safe to repeat 200 times.
Similar question here, and some decent recommendations...
https://unix.stackexchange.com/questions/400427/when-to-use-clientaliveinterval-versus-serveraliveinterval
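For comparison, the client-side counterpart discussed in that link lives in ~/.ssh/config (or /etc/ssh/ssh_config) rather than sshd_config; the values here are just examples:
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3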