how to disable rsh and rlogin service on debian 7.8 - iptables

I want to disable rsh and rlogin. I have disabled ports 513 and 514 in iptables, but I can still log in with rsh from another host. I'm working on Debian 7.8 32-bit.
My /etc/inetd.conf is here:
# /etc/inetd.conf: see inetd(8) for further informations.
#
# Internet superserver configuration database
#
#
# Lines starting with "#:LABEL:" or "#<off>#" should not
# be changed unless you know what you are doing!
#
# If you want to disable an entry so it isn't touched during
# package updates just comment it out with a single '#' character.
#
# Packages should modify this file by using update-inetd(8)
#
# <service_name> <sock_type> <proto> <flags> <user> <server_path> <args>
#
#:INTERNAL: Internal services
#discard stream tcp nowait root internal
#discard dgram udp wait root internal
#daytime stream tcp nowait root internal
#time stream tcp nowait root internal
#:STANDARD: These are standard services.
#:BSD: Shell, login, exec and talk are BSD protocols.
#:MAIL: Mail, news and uucp services.
#:INFO: Info services
#:BOOT: TFTP service is provided primarily for booting. Most sites
# run this only on machines acting as "boot servers."
#:RPC: RPC based services
#:HAM-RADIO: amateur-radio services
#:OTHER: Other services
#<off># netbios-ssn stream tcp nowait root /usr/sbin/tcpd /usr/sbin/smbd
swat stream tcp nowait.400 root /usr/sbin/tcpd /usr/sbin/swat
#<off># sane-port stream tcp nowait saned:saned /usr/sbin/saned saned

Modify the disable field in the configuration files /etc/xinetd.d/rlogin and /etc/xinetd.d/rsh, then restart the service with service xinetd restart.
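For reference, a typical /etc/xinetd.d/rlogin entry looks like the hedged sketch below (the rlogin service is named login, rsh is named shell, and the server path may differ on your system); the key line is disable = yes:
service login
{
    socket_type = stream
    wait        = no
    user        = root
    server      = /usr/sbin/in.rlogind
    disable     = yes
}
Note that the /etc/inetd.conf shown above belongs to inetd, not xinetd; if the system is actually running openbsd-inetd, comment out any shell and login lines in that file and restart inetd instead.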

Related

GitLab CI runner with SSH ProxyJump

I have the following settings in my /etc/ssh/ssh_config file:
Host serverA
    User idA
Host serverB
    User idB
    ProxyJump serverA
I’ve also copied the public keys, so if I locally run ssh serverB I’m correctly connected to serverB as idB through serverA.
Now, here’s my runner configuration in /etc/gitlab-runner/config.toml:
[[runners]]
  name = "ssh-runner-1"
  url = "http://my-cicd-server"
  token = "xxxxxxxxxxxxxxxx"
  executor = "ssh"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.ssh]
    user = "idB"
    host = "serverB"
    identity_file = "/home/gitlab-runner/.ssh/id_ed25519"
When I run a CI/CD job on this runner I get a "connection refused" error:
ERROR: Preparation failed: ssh command Connect() error: ssh Dial() error: dial tcp xxx.xxx.xxx.xxx:22: connect: connection refused
I conclude that the ProxyJump configuration is not applied, and since the machine with the runner can’t directly connect to serverB, I get denied access.
How can I configure the runner to apply the proxy jump configuration?
The GitLab runner uses a Go-based SSH client. It does not respect your system SSH configuration and does not have the same configurability as the standard ssh (usually OpenSSH) packages you typically find installed in operating system distributions or similar packages.
The Go client does not support the ProxyJump configuration.
Your best bet would probably be to configure a tunneled connection where your entrypoint does not require SSH configuration options that are not supported.
Local port forwarding
One way might be to open a local port-forwarding tunnel, then in your GitLab configuration, specify the host as localhost and port as the forwarded port.
For example:
# Open the tunnel: local port 2222 forwards to port 22 on ServerB via an SSH connection through ServerA
ssh -L 2222:ServerB:22 -N ServerA
Configure the runner to use the tunnel:
...
[runners.ssh]
host = "localhost"
port = 2222
...
With this approach, you may have to write some automation on your server to restore the tunnel connection in the event it is broken. How you might do this depends on your operating system and preferred service manager, or you can use a tool like autossh.
This is basically how the ProxyJump configuration works under the hood.
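For example, a minimal sketch using autossh (assuming it is installed) to open the same tunnel and restart it automatically if it drops:
# -M 0 disables autossh's monitoring port in favor of SSH keepalives;
# -f runs in the background, -N opens the tunnel without a remote command
autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -L 2222:ServerB:22 ServerA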
IP/Port forwarding system
A similar approach would be to have your jump system automatically forward connections to the desired destination, for example via a software firewall rule (e.g. iptables NAT rules). That way the forwarding occurs transparently: simply tell the runner to target ServerA and the traffic will be moved on to ServerB.
This approach is more reliable, since you won't have to do anything to keep the tunnel alive if it ever drops. However, the configuration is much more complex and requires a static IP for ServerB.
For example, on ServerA, assuming the IP of ServerB is 10.10.10.10, the following iptables configuration could be used (IP forwarding must also be enabled on ServerA, e.g. sysctl -w net.ipv4.ip_forward=1):
# rewrite the destination of incoming connections on port 2222 to ServerB:22
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
# masquerade so that return traffic flows back through ServerA
iptables -t nat -A POSTROUTING -j MASQUERADE
Then the GitLab runner configuration:
...
[runners.ssh]
host = "ServerA"
port = 2222
...
Lastly, it may also be useful to know that disable_strict_host_key_checking is an undocumented configuration option for the runner as well, in the event you need this.
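If you do need it, it would presumably sit alongside the other SSH settings in config.toml, e.g.:
[runners.ssh]
  user = "idB"
  host = "serverB"
  disable_strict_host_key_checking = true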

How do you set up Caddy Server to work within WSL?

Context
I'm trying to learn how to use secure cookies using a very simple Node.js server that returns basic HTML over localhost:3000. I was told I would need to set up a reverse proxy in order to get SSL working, so that my development environment mimics production as closely as possible. I realize that I can learn how secure cookies work over localhost without a reverse proxy, but I wanted the challenge and to learn how to get SSL set up in a development environment.
Setup
I personally prefer to develop in WSL (WSL 2, Ubuntu-20.04), so naturally I set up the Node server in WSL, along with Caddy Server using the following configuration provided by the levelup tutorials course that is teaching me about web authentication. I ran Caddy Server by running caddy run in the directory containing the following file.
{
    local_certs
}

nodeauth.dev {
    reverse_proxy 127.0.0.1:3000
}
The following were the startup logs for the Caddy Server
2021/07/01 00:53:18.253 INFO using adjacent Caddyfile
2021/07/01 00:53:18.256 WARN input is not formatted with 'caddy fmt' {"adapter": "caddyfile", "file": "Caddyfile", "line": 2}
2021/07/01 00:53:18.258 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["localhost:2019", "[::1]:2019", "127.0.0.1:2019"]}
2021/07/01 00:53:18.262 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc0003c5260"}
2021/07/01 00:53:18.281 INFO http server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS {"server_name": "srv0", "https_port": 443}
2021/07/01 00:53:18.281 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"}
2021/07/01 00:53:18.450 INFO pki.ca.local root certificate is already trusted by system {"path": "storage:pki/authorities/local/root.crt"}
2021/07/01 00:53:18.451 INFO http enabling automatic TLS certificate management {"domains": ["nodeauth.dev"]}
2021/07/01 00:53:18.452 INFO tls cleaning storage unit {"description": "FileStorage:/home/rtclements/.local/share/caddy"}
2021/07/01 00:53:18.454 INFO tls finished cleaning storage units
2021/07/01 00:53:18.454 WARN tls stapling OCSP {"error": "no OCSP stapling for [nodeauth.dev]: no OCSP server specified in certificate"}
2021/07/01 00:53:18.456 INFO autosaved config (load with --resume flag) {"file": "/home/rtclements/.config/caddy/autosave.json"}
2021/07/01 00:53:18.456 INFO serving initial configuration
I also added 127.0.0.1 nodeauth.dev to the hosts file in WSL at /etc/hosts. Below is the resulting file.
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
127.0.1.1 MSI.localdomain MSI
192.168.99.100 docker
127.0.0.1 nodeauth.dev
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Problem
Interestingly enough, I was able to hit the Node server via my browser by navigating to localhost:3000 (as expected), and was able to hit the Caddy admin endpoint by navigating to localhost:2019. The following is the log Caddy Server output when I hit it.
2021/07/01 00:53:32.224 INFO admin.api received request {"method": "GET", "host": "localhost:2019", "uri": "/", "remote_addr": "127.0.0.1:34390", "headers": {"Accept":["*/*"],"User-Agent":["curl/7.68.0"]}}
I am not, however, able to see the HTML from my Node server in my browser by navigating to nodeauth.dev. Neither do I see any output from running curl nodeauth.dev in my console in WSL, whereas I get the expected output when I run curl localhost:3000, also in WSL. Why is this?
I figured it had something to do with the hosts file on Windows not including this configuration, so I tried modifying that file to look like this, but I still couldn't get it to work.
# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
# 102.54.94.97 rhino.acme.com # source server
# 38.25.63.10 x.acme.com # x client host
# localhost name resolution is handled within DNS itself.
# 127.0.0.1 localhost
# ::1 localhost
192.168.99.100 docker
127.0.0.1 nodeauth.dev
I tried running a PowerShell script that I barely understand, which I found here, but that didn't work either, and I was no longer able to access localhost:3000. I'm guessing it does some form of port forwarding. Code below.
If (-NOT ([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator"))
{
    # relaunch the script elevated if it is not already running as Administrator
    $arguments = "& '" + $myinvocation.mycommand.definition + "'"
    Start-Process powershell -Verb runAs -ArgumentList $arguments
    Break
}
# create the firewall rule to let in 443/80
if( -not ( get-netfirewallrule -displayname web -ea 0 )) {
    new-netfirewallrule -name web -displayname web -enabled true -profile any -action allow -localport 80,443 -protocol tcp
}
# extract the WSL 2 VM's IP address from its eth0 interface
$remoteport = bash.exe -c "ifconfig eth0 | grep 'inet '"
$found = $remoteport -match '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}';
if( $found ){
    $remoteport = $matches[0];
} else{
    echo "The Script Exited, the ip address of WSL 2 cannot be found";
    exit;
}
# proxy ports 80 and 443 on the Windows host through to the WSL 2 VM
$ports=@(80,443);
iex "netsh interface portproxy reset";
for( $i = 0; $i -lt $ports.length; $i++ ){
    $port = $ports[$i];
    iex "netsh interface portproxy add v4tov4 listenport=$port connectport=$port connectaddress=$remoteport";
}
iex "netsh interface portproxy show v4tov4";
The only thing that worked was when I ran Caddy Server on Windows with the same configuration and changes to the hosts files as shown before. I'm not terribly sure what's going on here, but would any of y'all happen to know?

What ACL commands are required for Master-Replica synchronization in Redis 6?

When configuring Redis 6 with ACLs in a cluster environment, an additional user must be created (assuming the default user is not desired or does not have access to the PSYNC command). What are the exact commands that must be assigned to this user?
There is a small note about ACL rules for Sentinel and Replicas in the documentation indicating that Sentinel needs:
AUTH, CLIENT, SUBSCRIBE, SCRIPT, PUBLISH, PING, INFO, MULTI, SLAVEOF,
CONFIG, CLIENT, EXEC
and replicas need:
PSYNC, REPLCONF, PING
My best guess is to combine the two for a command set of:
AUTH, CLIENT, SUBSCRIBE, SCRIPT, PUBLISH, PING, INFO, MULTI, SLAVEOF,
CONFIG, CLIENT, EXEC, PSYNC, REPLCONF
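As a hedged sketch, that combined set could be granted to a dedicated user with ACL SETUSER (the name replica-user and the password are placeholders):
ACL SETUSER replica-user on >somepassword +auth +client +subscribe +script +publish +ping +info +multi +slaveof +config +exec +psync +replconf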
Excerpt from redis.conf which indicates "and/or other commands needed for replication":
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the replica request.
#
masterauth mymasterpassword
#
# However this is not enough if you are using Redis ACLs (for Redis version
# 6 or greater), and the default user is not capable of running the PSYNC
# command and/or other commands needed for replication. In this case it's
# better to configure a special user to use with replication, and specify the
# masteruser configuration as such:
#
masteruser mymasteruser
#
# When masteruser is specified, the replica will authenticate against its
# master using the new AUTH form: AUTH <username> <password>.
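Putting the excerpt and the command list together, a minimal sketch (user name and password are placeholders) would be to create the replication user on the master and point each replica at it:
# on the master:
ACL SETUSER replica-user on >somepassword +psync +replconf +ping
# in each replica's redis.conf:
masteruser replica-user
masterauth somepassword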

Getting a lost Sentinel error message for Redis

I am running a Spring Boot service using Spring Data Redis, and here is the configuration.
The service seems to work, but I am seeing a stream of Lost Sentinel messages in the logs. The sentinel nodes are reachable from the VM where I am running the service; I was able to telnet to them directly from that VM. Any idea why this is happening?
spring:
  profiles:
    active: core-perf,swagger
    default: core-perf,swagger
  redis:
    Pool: # Pool properties
      # Max number of "idle" connections in the pool. Use a negative value to indicate
      # an unlimited number of idle connections.
      maxIdle: 8
      # Target for the minimum number of idle connections to maintain in the pool.
      # This setting only has an effect if it is positive.
      minIdle: 0
      # Max number of connections that can be allocated by the pool at a given time. Use a negative value for no limit.
      maxActive: 8
      # Maximum amount of time (in milliseconds) a connection allocation should block
      # before throwing an exception when the pool is exhausted. Use a negative value
      # to block indefinitely.
      maxWait: -1
    sentinel: # Redis sentinel properties
      master: mymaster
      nodes: 10.202.56.209:26379, 10.202.56.213:26379, 10.202.58.80:26379
2015-06-15 17:30:54.896 ERROR 6677 --- [Thread-9] redis.clients.jedis.JedisSentinelPool : Lost connection to Sentinel at 10.202.58.80:26379. Sleeping 5000ms and retrying.
2015-06-15 17:30:59.894 ERROR 6677 --- [Thread-8] redis.clients.jedis.JedisSentinelPool : Lost connection to Sentinel at 10.202.56.213:26379. Sleeping 5000ms and retrying.
2015-06-15 17:30:59.897 ERROR 6677 --- [Thread-9] redis.clients.jedis.JedisSentinelPool : Lost connection to Sentinel at 10.202.58.80:26379. Sleeping 5000ms and retrying.
2015-06-15 17:31:04.975 ERROR 6677 --- [Thread-9] redis.clients.jedis.JedisSentinelPool : Lost connection to Sentinel at 10.202.58.80:26379. Sleeping 5000ms and retrying.
2015-06-15 17:31:04.976 ERROR 6677 --- [Thread-8] redis.clients.jedis.JedisSentinelPool : Lost connection to Sentinel at 10.202.56.213:26379. Sleeping 5000ms and retrying.
2015-06-15 17:31:09.976 ERROR 6677 --- [Thread-9] redis.clients.jedis.JedisSentinelPool : Lost connection to Sentinel at 10.202.58.80:26379. Sleeping 5000ms and retrying.
2015-06-15 17:31:09.976 ERROR 6677 --- [Thread-8] redis.clients.jedis.JedisSentinelPool : Lost connection to Sentinel at 10.202.56.213:26379. Sleeping 5000ms and retrying.
2015-06-15 17:31:15.054 ERROR 6677 --- [Thread-8] redis.clients.jedis.JedisSentinelPool : Lost connection to Sentinel at 10.202.56.213:26379. Sleeping 5000ms and retrying.
2015-06-15 17:31:15.055 ERROR 6677 --- [Thread-9] redis.clients.jedis.JedisSentinelPool : Lost connection to Sentinel at 10.202.58.80:26379. Sleeping 5000ms and retrying.
2015-06-15 17:31:20.055 ERROR 6677 --- [Thread-8] redis.clients.jedis.JedisSentinelPool : Lost connection to Sentinel at 10.202.56.213:26379. Sleeping 5000ms and retrying.
We discovered the issue: there was a blank space between the node entries in the application.yml, and once we removed this " " the Lost Sentinel log message disappeared.
So, from
nodes: 10.202.56.209:26379, 10.202.56.213:26379, 10.202.58.80:26379
to
nodes: 10.202.56.209:26379,10.202.56.213:26379,10.202.58.80:26379
It would probably be a good thing if the committers looked at this, as it seems somewhat mysterious for users.
I lost my head over this issue for two days, until I decided to reinstall Redis, configure it via my old config files, and then replace the sentinel.conf file with the text below.
It finally worked.
# *** IMPORTANT ***
#
# By default Sentinel will not be reachable from interfaces different than
# localhost, either use the 'bind' directive to bind to a list of network
# interfaces, or disable protected mode with "protected-mode no" by
# adding it to this configuration file.
#
# Before doing that MAKE SURE the instance is protected from the outside
# world via firewalling or other means.
#
# For example you may use one of the following:
#
# bind 127.0.0.1 192.168.1.1
#
# protected-mode no
# port <sentinel-port>
# The port that this sentinel instance will run on
port 26379
# By default Redis Sentinel does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis-sentinel.pid when
# daemonized.
daemonize no
# When running daemonized, Redis Sentinel writes a pid file in
# /var/run/redis-sentinel.pid by default. You can specify a custom pid file
# location here.
pidfile /var/run/redis-sentinel.pid
# Specify the log file name. Also the empty string can be used to force
# Sentinel to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""
# sentinel announce-ip <ip>
# sentinel announce-port <port>
#
# The above two configuration directives are useful in environments where,
# because of NAT, Sentinel is reachable from outside via a non-local address.
#
# When announce-ip is provided, the Sentinel will claim the specified IP address
# in HELLO messages used to gossip its presence, instead of auto-detecting the
# local address as it usually does.
#
# Similarly when announce-port is provided and is valid and non-zero, Sentinel
# will announce the specified TCP port.
#
# The two options don't need to be used together, if only announce-ip is
# provided, the Sentinel will announce the specified IP and the server port
# as specified by the "port" option. If only announce-port is provided, the
# Sentinel will announce the auto-detected local IP and the specified port.
#
# Example:
#
# sentinel announce-ip 1.2.3.4
# dir <working-directory>
# Every long running process should have a well-defined working directory.
# For Redis Sentinel to chdir to /tmp at startup is the simplest thing
# for the process not to interfere with administrative tasks such as
# unmounting filesystems.
dir /tmp
# sentinel monitor <master-name> <ip> <redis-port> <quorum>
#
# Tells Sentinel to monitor this master, and to consider it in O_DOWN
# (Objectively Down) state only if at least <quorum> sentinels agree.
#
# Note that whatever is the ODOWN quorum, a Sentinel will require to
# be elected by the majority of the known Sentinels in order to
# start a failover, so no failover can be performed in minority.
#
# Replicas are auto-discovered, so you don't need to specify replicas in
# any way. Sentinel itself will rewrite this configuration file adding
# the replicas using additional configuration options.
# Also note that the configuration file is rewritten when a
# replica is promoted to master.
#
# Note: master name should not include special characters or spaces.
# The valid charset is A-z 0-9 and the three characters ".-_".
sentinel monitor mymaster 127.0.0.1 6379 2
# sentinel auth-pass <master-name> <password>
#
# Set the password to use to authenticate with the master and replicas.
# Useful if there is a password set in the Redis instances to monitor.
#
# Note that the master password is also used for replicas, so it is not
# possible to set a different password in masters and replicas instances
# if you want to be able to monitor these instances with Sentinel.
#
# However you can have Redis instances without the authentication enabled
# mixed with Redis instances requiring the authentication (as long as the
# password set is the same for all the instances requiring the password) as
# the AUTH command will have no effect in Redis instances with authentication
# switched off.
#
# Example:
#
# sentinel auth-pass mymaster MySUPER--secret-0123passw0rd
# sentinel auth-user <master-name> <username>
#
# This is useful in order to authenticate to instances having ACL capabilities,
# that is, running Redis 6.0 or greater. When just auth-pass is provided the
# Sentinel instance will authenticate to Redis using the old "AUTH <pass>"
# method. When also an username is provided, it will use "AUTH <user> <pass>".
# In the Redis servers side, the ACL to provide just minimal access to
# Sentinel instances, should be configured along the following lines:
#
# user sentinel-user >somepassword +client +subscribe +publish \
# +ping +info +multi +slaveof +config +client +exec on
# sentinel down-after-milliseconds <master-name> <milliseconds>
#
# Number of milliseconds the master (or any attached replica or sentinel) should
# be unreachable (as in, no acceptable reply to PING, continuously, for the
# specified period) in order to consider it in S_DOWN state (Subjectively
# Down).
#
# Default is 30 seconds.
sentinel down-after-milliseconds mymaster 30000
# IMPORTANT NOTE: starting with Redis 6.2 ACL capability is supported for
# Sentinel mode, please refer to the Redis website https://redis.io/topics/acl
# for more details.
# Sentinel's ACL users are defined in the following format:
#
# user <username> ... acl rules ...
#
# For example:
#
# user worker +#admin +#connection ~* on >ffa9203c493aa99
#
# For more information about ACL configuration please refer to the Redis
# website at https://redis.io/topics/acl and redis server configuration
# template redis.conf.
# ACL LOG
#
# The ACL Log tracks failed commands and authentication events associated
# with ACLs. The ACL Log is useful to troubleshoot failed commands blocked
# by ACLs. The ACL Log is stored in memory. You can reclaim memory with
# ACL LOG RESET. Define the maximum entry length of the ACL Log below.
acllog-max-len 128
# Using an external ACL file
#
# Instead of configuring users here in this file, it is possible to use
# a stand-alone file just listing users. The two methods cannot be mixed:
# if you configure users here and at the same time you activate the external
# ACL file, the server will refuse to start.
#
# The format of the external ACL user file is exactly the same as the
# format that is used inside redis.conf to describe users.
#
# aclfile /etc/redis/sentinel-users.acl
# requirepass <password>
#
# You can configure Sentinel itself to require a password, however when doing
# so Sentinel will try to authenticate with the same password to all the
# other Sentinels. So you need to configure all your Sentinels in a given
# group with the same "requirepass" password. Check the following documentation
# for more info: https://redis.io/topics/sentinel
#
# IMPORTANT NOTE: starting with Redis 6.2 "requirepass" is a compatibility
# layer on top of the ACL system. The option effect will be just setting
# the password for the default user. Clients will still authenticate using
# AUTH <password> as usually, or more explicitly with AUTH default <password>
# if they follow the new protocol: both will work.
#
# New config files are advised to use separate authentication control for
# incoming connections (via ACL), and for outgoing connections (via
# sentinel-user and sentinel-pass)
#
# The requirepass is not compatible with the aclfile option and the ACL LOAD
# command; these will cause requirepass to be ignored.
# sentinel sentinel-user <username>
#
# You can configure Sentinel to authenticate with other Sentinels with specific
# user name.
# sentinel sentinel-pass <password>
#
# The password for Sentinel to authenticate with other Sentinels. If sentinel-user
# is not configured, Sentinel will use 'default' user with sentinel-pass to authenticate.
# sentinel parallel-syncs <master-name> <numreplicas>
#
# How many replicas we can reconfigure to point to the new replica simultaneously
# during the failover. Use a low number if you use the replicas to serve query
# to avoid that all the replicas will be unreachable at about the same
# time while performing the synchronization with the master.
sentinel parallel-syncs mymaster 1
# sentinel failover-timeout <master-name> <milliseconds>
#
# Specifies the failover timeout in milliseconds. It is used in many ways:
#
# - The time needed to re-start a failover after a previous failover was
# already tried against the same master by a given Sentinel, is two
# times the failover timeout.
#
# - The time needed for a replica replicating to a wrong master according
# to a Sentinel current configuration, to be forced to replicate
# with the right master, is exactly the failover timeout (counting since
# the moment a Sentinel detected the misconfiguration).
#
# - The time needed to cancel a failover that is already in progress but
# did not produce any configuration change (SLAVEOF NO ONE not yet
# acknowledged by the promoted replica).
#
# - The maximum time a failover in progress waits for all the replicas to be
# reconfigured as replicas of the new master. However even after this time
# the replicas will be reconfigured by the Sentinels anyway, but not with
# the exact parallel-syncs progression as specified.
#
# Default is 3 minutes.
sentinel failover-timeout mymaster 180000
# SCRIPTS EXECUTION
#
# sentinel notification-script and sentinel reconfig-script are used in order
# to configure scripts that are called to notify the system administrator
# or to reconfigure clients after a failover. The scripts are executed
# with the following rules for error handling:
#
# If script exits with "1" the execution is retried later (up to a maximum
# number of times currently set to 10).
#
# If script exits with "2" (or a higher value) the script execution is
# not retried.
#
# If script terminates because it receives a signal the behavior is the same
# as exit code 1.
#
# A script has a maximum running time of 60 seconds. After this limit is
# reached the script is terminated with a SIGKILL and the execution retried.
# NOTIFICATION SCRIPT
#
# sentinel notification-script <master-name> <script-path>
#
# Call the specified notification script for any sentinel event that is
# generated in the WARNING level (for instance -sdown, -odown, and so forth).
# This script should notify the system administrator via email, SMS, or any
# other messaging system, that there is something wrong with the monitored
# Redis systems.
#
# The script is called with just two arguments: the first is the event type
# and the second the event description.
#
# The script must exist and be executable in order for sentinel to start if
# this option is provided.
#
# Example:
#
# sentinel notification-script mymaster /var/redis/notify.sh
# CLIENTS RECONFIGURATION SCRIPT
#
# sentinel client-reconfig-script <master-name> <script-path>
#
# When the master changed because of a failover a script can be called in
# order to perform application-specific tasks to notify the clients that the
# configuration has changed and the master is at a different address.
#
# The following arguments are passed to the script:
#
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
#
# <state> is currently always "failover"
# <role> is either "leader" or "observer"
#
# The arguments from-ip, from-port, to-ip, to-port are used to communicate
# the old address of the master and the new address of the elected replica
# (now a master).
#
# This script should be resistant to multiple invocations.
#
# Example:
#
# sentinel client-reconfig-script mymaster /var/redis/reconfig.sh
# SECURITY
#
# By default SENTINEL SET will not be able to change the notification-script
# and client-reconfig-script at runtime. This avoids a trivial security issue
# where clients can set the script to anything and trigger a failover in order
# to get the program executed.
sentinel deny-scripts-reconfig yes
# REDIS COMMANDS RENAMING
#
# Sometimes the Redis server has certain commands, that are needed for Sentinel
# to work correctly, renamed to unguessable strings. This is often the case
# of CONFIG and SLAVEOF in the context of providers that provide Redis as
# a service, and don't want the customers to reconfigure the instances outside
# of the administration console.
#
# In such case it is possible to tell Sentinel to use different command names
# instead of the normal ones. For example if the master "mymaster", and the
# associated replicas, have "CONFIG" all renamed to "GUESSME", I could use:
#
# SENTINEL rename-command mymaster CONFIG GUESSME
#
# After such configuration is set, every time Sentinel would use CONFIG it will
# use GUESSME instead. Note that there is no actual need to respect the command
# case, so writing "config guessme" is the same in the example above.
#
# SENTINEL SET can also be used in order to perform this configuration at runtime.
#
# In order to set a command back to its original name (undo the renaming), it
# is possible to just rename a command to itself:
#
# SENTINEL rename-command mymaster CONFIG CONFIG
# HOSTNAMES SUPPORT
#
# Normally Sentinel uses only IP addresses and requires SENTINEL MONITOR
# to specify an IP address. Also, it requires the Redis replica-announce-ip
# keyword to specify only IP addresses.
#
# You may enable hostnames support by enabling resolve-hostnames. Note
# that you must make sure your DNS is configured properly and that DNS
# resolution does not introduce very long delays.
#
SENTINEL resolve-hostnames no
# When resolve-hostnames is enabled, Sentinel still uses IP addresses
# when exposing instances to users, configuration files, etc. If you want
# to retain the hostnames when announced, enable announce-hostnames below.
#
SENTINEL announce-hostnames no

Flood protection for Jetty server?

I'm looking for a solution to prevent a Jetty server from being taken down by a DDoS attack or similar. Currently the servlets open a new thread for each incoming connection, so one million incoming connections will open one million threads and Jetty will explode.
What's the best way to avoid this threat? I thought about putting an Apache server between the clients and the server, since the web server has the ability to limit incoming connections from one IP to e.g. 5 connections/second.
What do you think about my idea?
Kind regards,
Hendrik
Jetty ships with a Quality of Service filter that should do what you want.
See http://wiki.eclipse.org/Jetty/Feature/Quality_of_Service_Filter
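For the QoSFilter itself, a hedged web.xml sketch limiting the number of requests handled concurrently (maxRequests is the filter's main tuning knob; 50 is an arbitrary value):
<filter>
  <filter-name>QoSFilter</filter-name>
  <filter-class>org.eclipse.jetty.servlets.QoSFilter</filter-class>
  <init-param>
    <param-name>maxRequests</param-name>
    <param-value>50</param-value>
  </init-param>
</filter>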
The DoSFilter can be used to provide DDoS protection.
To quote the description from the wiki,
The Denial of Service (DoS) filter limits exposure to request
flooding, whether malicious, or as a result of a misconfigured client.
The DoS filter keeps track of the number of requests from a connection
per second. If the requests exceed the limit, Jetty rejects, delays,
or throttles the request, and sends a warning message.
To enable it, you have to include the following in the webapp's web.xml or jetty-web.xml:
<filter>
  <filter-name>DoSFilter</filter-name>
  <filter-class>org.eclipse.jetty.servlets.DoSFilter</filter-class>
  <init-param>
    <param-name>maxRequestsPerSec</param-name>
    <param-value>30</param-value>
  </init-param>
</filter>
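A servlet filter also needs a corresponding mapping to take effect; a minimal sketch applying it to all requests:
<filter-mapping>
  <filter-name>DoSFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>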
Check the wiki for customization.
The idea of serving new connections with org.eclipse.jetty.servlets.QoSFilter is good, but I would rather use a typical anti-DDoS configuration based on iptables (as in this article: http://blog.bodhizazen.net/linux/prevent-dos-with-iptables/).
# accept new connections to port 80 at up to 50 per minute (burst of 200); anything above falls through to the chain's default policy
sudo iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m limit --limit 50/minute --limit-burst 200 -j ACCEPT
# accept packets for established connections at up to 50 per second
sudo iptables -A INPUT -m state --state RELATED,ESTABLISHED -m limit --limit 50/second --limit-burst 50 -j ACCEPT
In this case the DDoS protection is separated from the app, and it is more efficient because excess packets are dropped before they ever reach Jetty.