I installed Redis on my computer and started one redis-server and two redis-cli sessions. If I type the SHUTDOWN SAVE command in the first redis-cli terminal, it closes both the server and that first redis-cli. The second redis-cli then can't communicate with the redis-server anymore, because it has already been shut down by the other client. That just doesn't make sense to me. IMO, a server is a standalone service and should always be running; a client should be able to connect to and disconnect from a server, but never disable it. Why would Redis allow a client to disable a server that could be shared by many other clients? Consider the case where the Redis server is on a remote machine and the clients are on other machines: wouldn't it be very dangerous if one of the clients could shut down the remote server and thereby affect all the other clients?
If you don't want clients to execute the SHUTDOWN command (or any other for that matter), you can use the rename-command configuration directive.
As of the upcoming Redis v6, ACLs are expected to provide a finer degree of control over administrative and application commands.
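For example, a minimal redis.conf sketch using rename-command (the replacement name below is just an illustration):
# disable SHUTDOWN entirely by renaming it to the empty string
rename-command SHUTDOWN ""
# or keep it available under an obscure name known only to administrators
# rename-command SHUTDOWN SHUTDOWN_4dm1n_0nly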
No, I think you are getting it wrong. It's the application's responsibility to allow or disallow specific actions on a remote server. You can simply disable certain commands so that a single CLI cannot take down the redis-server.
Suppose my machine has an SSH daemon running. How do I get its fingerprint without having to create a network connection?
That is to say, without using ssh-keyscan.
The keys returned by ssh-keyscan are in /etc/ssh/*.pub on the server.
So unless you have a program on the server ready to send them to you on its own, some kind of network connection is required to fetch them.
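That said, if the daemon is running on a machine you already have a shell on (as the question suggests), you can read the fingerprints straight from those host-key files with ssh-keygen, with no network connection at all; a minimal sketch:
# run locally on the server; prints the fingerprint of each host key
for key in /etc/ssh/ssh_host_*_key.pub; do
    ssh-keygen -lf "$key"
done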
I have been working on an HTTP server which accepts connections and then, based on the host name, loads the right project from a .so, generates the page the client is asking for, and sends it back.
Now that I have several working projects, I am interested in making them available to others, but here is my problem:
I connect to my dedicated server through SSH and start my daemon from there, but after a while the pages are no longer accessible because my program is no longer running.
I also get kicked off by the server after a while. I wonder:
How do I keep my server running? Does the fact that I keep getting kicked off by SSH after a little idle time explain why my daemon is being shut down?
Thanks in advance to whoever can give me some element of an answer.
When your SSH session times out, SIGHUP is sent to the sub-processes forked from the current interactive shell. That's why the processes were terminated (the server is no longer running).
To avoid an idle SSH connection being kicked by the server, set ServerAliveInterval so the client periodically requests a response from the server (e.g. in ~/.ssh/config):
Host *
  ServerAliveInterval 30
To avoid shell sub-process termination, refer to
https://askubuntu.com/questions/348836/keep-the-running-processes-alive-when-disconneting-the-remote-connection/348921#348921
https://askubuntu.com/questions/349262/run-a-nohup-command-over-ssh-then-disconnect
In short, there are 3 options (a quick sketch of the nohup/disown approach follows the list):
nohup
disown / setsid
start the servers in a tmux or screen session on the server
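For example, a minimal sketch of the nohup/disown approach, assuming your daemon binary is called ./mydaemon (the name is just a placeholder):
# start the daemon immune to the SIGHUP sent when the SSH session ends
nohup ./mydaemon > mydaemon.log 2>&1 &
# optionally also remove the job from the shell's job table
disown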
NOTE: If the server instances are already properly daemonized, try looking at monit or supervisord to keep them running ;-D
I want to install a number of Raspberry Pis at remote locations and be able to log in to them remotely. (We will begin with 30-40 boxes and hopefully grow to 1000 individual Raspberry Pis soon.)
I need to be able to remotely manage these boxes. The easier route, forwarding a port on the router and setting a DHCP reservation, requires either IT support from the company we'll be doing the install for (many of which don't have IT), or one of our IT people physically installing each box.
My tentative solution is to have each box create a reverse SSH tunnel to our server. My question is: How feasible would this be? How easy would it be to manage that many connections? Would it be an issue for a small local server to have 1000+ concurrent SSH connections? Is there an easier solution to this problem?
My end goal is to be able to ship someone a box, have them plug it in, and be able to access it.
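For concreteness, this is the kind of tunnel I'm picturing each box keeping open (the hostname, user, and remote port are placeholders, each box would get its own port, and autossh is just one way to keep the tunnel alive):
# run on each Pi: maintain a reverse tunnel so our server can reach the Pi's port 22
autossh -M 0 -N \
    -o ServerAliveInterval=30 -o ExitOnForwardFailure=yes \
    -R 20001:localhost:22 tunnel@our-server.example.com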
Thanks,
w
An alternate solution would be to:
Install an OpenVPN server on your server machine (see: How to install OpenVPN Server on the Pi). Additionally, add firewall rules that block everything except traffic from the administrating machine(s) to the clients' SSH port and other service ports (if desired).
Run OpenVPN clients on your Raspberry Pi client machines; they will connect back to your VPN server (see: How to install OpenVPN on the client Raspberry Pis). On a side note, the VPN server and the administrating machine(s) need not be the same machine if resources are limited on the VPN server.
SSH from the administrating machine(s) to each client machine. Optionally, you could use RSA key authentication to simplify logging in.
Benefits include encryption for the tunnel (including SSH encryption for administration), as well as being able to monitor other services on their respective ports.
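Once the VPN is up, each Pi has a stable address on the tunnel subnet and can be reached directly; a small sketch, assuming OpenVPN's default 10.8.0.0/24 subnet and a made-up client address and key file:
# from an administrating machine: SSH straight to a Pi over the VPN
ssh -i ~/.ssh/admin_key pi@10.8.0.6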
I made a web app to manage this exact same setup in about 60 minutes with my Java web template. All I can share are some scripts I use to list the connections and info about them. You can use those to build your own app; it is really simple to display this in some fancy way in a quick web app.
Take a look at my scripts: https://unix.stackexchange.com/a/625771/332669
Those will let you get the listening port, as well as the public IPs the tunnels are bound from. With that you can easily plan a system where everything is identifiable with a simple database.
You might find this Docker container useful: https://hub.docker.com/r/logicethos/revssh/
RabbitMQ starts up just fine, but the shovel plugin status is listed as "starting".
Each broker is running on a separate AWS instance; the remote server is Windows Server 2008 and the local server is Amazon Linux.
I'm using the following rabbitmq.config:
[{rabbitmq_shovel,
  [{shovels,
    [{scrape_request_shovel,
      [{sources, [{broker, "amqp://test_user:test_password@localhost"}]},
       {destinations, [{broker, "amqp://test_user:test_password@ec2-###-##-###-###.compute-1.amazonaws.com"}]},
       {queue, <<"scp_request">>},
       {ack_mode, on_confirm},
       {publish_properties, [{delivery_mode, 2}]},
       {publish_fields, [{exchange, <<"">>},
                         {routing_key, <<"scp_request">>}]},
       {reconnect_delay, 5}
      ]}
    ]
  }]
}].
Running the following command:
sudo rabbitmqctl eval 'rabbit_shovel_status:status().'
returns:
[{scrape_request_shovel,starting,{{2012,7,11},{23,38,47}}}]
According to this question, this can happen if the users haven't been set up correctly on the two brokers. However, I've double-checked that I've set up the users correctly via rabbitmqctl add_user on both machines -- I have even tried it with a different set of users, to be sure.
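For reference, a minimal sketch of what that user setup looks like on each broker, assuming the default / vhost and using the credentials from my config:
rabbitmqctl add_user test_user test_password
rabbitmqctl set_permissions -p / test_user ".*" ".*" ".*"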
I also ran an nmap scan of port 5672 on the remote host to verify it was up and listening on that port.
UPDATE: The problem isn't solved, but it does appear to be the result of connection problems with the remote server. I changed "reconnect_delay" to 0 in my config file to stop the shovel from retrying the connection indefinitely. I highly recommend others with this problem do the same, as it allows you to get error messages out of rabbit_shovel_status. In my case I got the following error:
[{scrape_request_shovel,
{terminated,
{{badmatch,{error,access_refused}},
[{rabbit_shovel_worker,make_conn_and_chan,1},
{rabbit_shovel_worker,handle_cast,2},
{gen_server2,handle_msg,2},
{proc_lib,init_p_do_apply,3}]}},
{{2012,7,12},{0,4,37}}}]
Answering my own question here, in case others encounter this issue. This error (and also a timeout error, {{badmatch,{error,etimedout}}}, if you get that instead) is almost certainly a communication problem between the two machines, most likely due to port access / firewall settings.
There were a couple of dumb things I was doing here:
1) I was using the wrong DNS name for my remote EC2 instance (d'oh! really dumb -- I can't tell you how long I spent banging my head against the wall on this one...). Remember that stopping and starting your instance generates a new public DNS name if you don't have an Elastic IP associated with the instance.
2) My remote instance is a Windows server, and I realized you have to open up port 5672 both in the Windows Firewall and in the EC2 security group -- there are two overlapping levels of access control here, and opening the port in the EC2 management console isn't sufficient if your machine is a Windows server on EC2; you also have to configure the Windows Firewall.
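After opening both layers, a quick way to confirm the port is actually reachable from the Linux broker (assuming nc is installed; the hostname below is the placeholder from my config):
nc -zv ec2-###-##-###-###.compute-1.amazonaws.com 5672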
I am able to issue commands to my EC2 instances via SSH, and these commands log output which I'm supposed to keep watching for a long time. The bad thing is that the SSH connection is closed after some time due to my inactivity, and I'm then no longer able to see what's going on with my instances.
How can I disable/increase timeout in Amazon Linux machines?
The error looks like this:
Read from remote host ec2-50-17-48-222.compute-1.amazonaws.com: Connection reset by peer
You can set a keep-alive option in the ~/.ssh/config file in your home directory on your local computer:
ServerAliveInterval 50
Amazon AWS usually drops your connection after only 60 seconds of inactivity, so this option will ping the server every 50 seconds and keep you connected indefinitely.
Assuming your Amazon EC2 instance is running Linux (and the very likely case that you are using SSH-2, not 1), the following should work pretty handily:
Remote into your EC2 instance.
ssh -i <YOUR_PRIVATE_KEY_FILE>.pem <INTERNET_ADDRESS_OF_YOUR_INSTANCE>
Add a "client-alive" directive to the instance's SSH-server configuration file.
echo 'ClientAliveInterval 60' | sudo tee --append /etc/ssh/sshd_config
Restart or reload the SSH server, for it to recognize the configuration change.
The command for that on Ubuntu Linux would be:
sudo service ssh restart
On any other Linux, though, the following is probably correct:
sudo service sshd restart
Disconnect.
logout
The next time you SSH into that EC2 instance, those super-annoying frequent connection freezes/timeouts/drops should hopefully be gone.
This also helps with Google Compute Engine instances, which come with similarly annoying default settings.
Warning: Do note that TCPKeepAlive settings (which also exist) are subtly, yet distinctly different from the ClientAlive settings that I propose above, and that changing TCPKeepAlive settings from the default may actually hurt your situation rather than help.
More info here: http://man.openbsd.org/?query=sshd_config
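For reference, here is roughly how the relevant directives end up looking in /etc/ssh/sshd_config (ClientAliveCountMax is shown at its default value of 3):
# probe an idle client every 60 seconds; drop the connection after 3 unanswered probes
ClientAliveInterval 60
ClientAliveCountMax 3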
Consider using screen or byobu and the problem will likely go away. What's more, even if the connection is lost, you can reconnect and restore access to the same terminal screen you had before, via screen -r or byobu -r.
byobu is an enhancement for screen, and has a wonderful set of options, such as an estimate of EC2 costs.
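A minimal sketch of that workflow on the instance (the session name "watcher" is arbitrary):
screen -S watcher
# run your long-lived command inside the session, detach with Ctrl-A d, log out;
# later, reconnect over SSH and reattach:
screen -r watcher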
I know that for PuTTY you can use a keepalive setting so it sends an activity packet every so often, to keep the connection from going "idle" or "stale":
http://the.earth.li/~sgtatham/putty/0.55/htmldoc/Chapter4.html#S4.13.4
If you are using another client, let me know.
You can use MobaXterm, a free tabbed SSH terminal, with the setting below:
Settings -> Configuration -> SSH -> SSH keepalive
Remember to restart the MobaXterm app after changing the setting.
I have 10+ custom AMIs, all based on Amazon Linux AMIs, and I've never run into any timeout issues due to inactivity on an SSH connection. I've had connections stay open for more than 24 hours without running a single command. I don't think there are any timeouts built into the Amazon Linux AMIs.