I just wanted to know whether memcached is linked to Apache or runs as a separate process. So, will restarting Apache clear my memcached contents or not?
You can also empty memcached without restarting it:
telnet localhost 11211
11211 is the default port for memcached (if nothing responds on this port, check your init script)
And within telnet:
flush_all
This will flush all stored data.
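If you'd rather not open an interactive telnet session, the same command can be piped through netcat instead (assuming `nc` is installed and memcached is listening on the default port):

```shell
# memcached's text protocol expects CRLF-terminated lines
printf 'flush_all\r\nquit\r\n' | nc localhost 11211
# memcached replies "OK" when the flush succeeds
```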
No. Memcache is not linked to the apache process. Memcached is a separate process.
HOWEVER, your application framework, if you are using one, may flush memcached when you restart Apache.
-daniel
Restarting the memcached service will also flush all stored data:
/etc/init.d/memcached restart
I installed Redis server on a cloud machine (Ubuntu 18.0) that has an SSD.
In the configuration file, I changed the dir to /temp and the dbfilename to dump.rdb
I restarted the server and checked the runtime settings with CONFIG GET.
It showed the values I set in the redis.conf file.
After 6 hours, I checked it again. The strange thing is, these values had changed to dir=/var/spool/cron and dbfilename=root.
I am sure nobody attacked my server; it is on our own VPN and not publicly accessible.
Now, I did one more test: I installed a Docker container (Ubuntu 18.0) on that same cloud instance and repeated the test in the container. There was no change in the runtime configuration after a couple of hours.
I also suspect the storage type: if the cloud machine is built with a magnetic HDD, Redis seems to work fine; if I build it with an SSD, Redis stops working after a couple of hours.
Can anybody help in this regard?
Thanks
I had a similar situation on my Redis server.
If your Redis server is accessible from the public network, it might be an attack.
In my case, I changed the default port of the Redis servers and added password protection.
After that, the same situation did not happen again.
Check the issue below on the Redis GitHub repository; you can get more information about your case:
https://github.com/redis/redis/issues/3594
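The symptom (dir silently changing to /var/spool/cron and dbfilename to root) is consistent with the attack that GitHub issue describes: an unprotected Redis is told via CONFIG SET to write its dump file into the cron directory. A minimal hardening sketch in redis.conf could look like this (the password below is a placeholder; choose your own):

```
# Listen only on the loopback/VPN interface, never 0.0.0.0
bind 127.0.0.1
# Require a password for all clients
requirepass some-long-random-password
# Disable the commands the attack relies on
rename-command CONFIG ""
rename-command FLUSHALL ""
```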
I installed Redis on my computer and opened one redis-server and two redis-cli sessions. If I type the "shutdown save" command in the first redis-cli terminal, it closes both the server and the first redis-cli. The second redis-cli then can't communicate with the redis-server anymore, because it has already been shut down by the other redis-cli. That just doesn't make sense to me. IMO, a server is a standalone service and should always be running: a client should be able to connect to and disconnect from a server, but never disable it. Why would Redis allow a client to disable a server that could be shared by many other clients? If the Redis server is on a remote machine and the clients are on other machines, wouldn't it be very dangerous that one client shutting down the remote server affects all the other clients?
If you don't want clients to execute the SHUTDOWN command (or any other for that matter), you can use the rename-command configuration directive.
As of the upcoming Redis v6, ACLs are expected to provide a better degree of control over administrative and application commands.
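For example, in redis.conf you could disable SHUTDOWN entirely (or rename it to something unguessable), and under Redis 6 ACLs you could define an application user that simply lacks the command (the user name and password here are placeholders):

```
# redis.conf: disable SHUTDOWN for everyone
rename-command SHUTDOWN ""

# Redis 6+ alternative: per-user control via ACLs
# ACL SETUSER app on >app-password ~* +@all -shutdown
```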
No, I think you are getting it wrong. It's the application's responsibility to allow or disallow specific actions on a remote server. You can simply disallow certain commands so that a single CLI cannot take down the redis-server.
I am doing a customer demo where I need to stress an sshd server with repeated sequential requests, so I wrote a small shell script with a loop in it. The first connection is successful; however, sshd refuses connections immediately after the first one, so all my subsequent requests fail.
Right now sshd is running in a Docker container and I am running the script from the host, so no external factor such as a network proxy is in the picture here.
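For reference, the loop boils down to a tiny helper like this sketch; it just counts how many invocations of a command succeed, and the ssh arguments shown in the comment are placeholders for your own host and key:

```shell
# Run a command N times sequentially and report how many runs succeeded.
# Usage for the sshd case (host and key are placeholders):
#   stress 100 ssh -i key.pem user@container-host true
stress() {
  n=$1; shift
  ok=0
  for i in $(seq 1 "$n"); do
    # count each successful run of the given command
    if "$@"; then
      ok=$((ok + 1))
    fi
  done
  echo "$ok/$n"
}
```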
So far I have checked the following things:
The sshd config file contains the following line (I bumped up the value):
MaxStartups 100:300:600
Checked everything here - http://edoceo.com/notabene/ssh-exchange-identification
I have been googling around for what could be the problem (too many links to post here). Any ideas?
OK, so the sshd daemon was being spawned in debug mode. In that mode it does not fork, so it gets killed after one connection. I switched it to regular mode and now the test is flying :)
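For anyone hitting the same thing: sshd's -d flag is single-connection debug mode (the server does not fork and exits after the first session). If you just want a foreground daemon in a container, -D keeps sshd in the foreground but still forks a child per connection (optionally with -e to log to stderr):

```shell
# debug mode: serves exactly one connection, then exits
/usr/sbin/sshd -d
# foreground daemon: forks per connection, suitable for containers
/usr/sbin/sshd -D -e
```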
I was interested in running Apache HTTP Server with the following parameters:
on a single server, listening on a single port
having configured several VirtualHosts, one per domain
running each VirtualHost as an instance listening on port 80
being able to reload one domain's configuration without having to restart the rest.
I have doubts about the memory consumption and, if it is an issue, how I should improve it.
I don't think it would be a memory problem (correct me if I'm wrong) as long as there's only one HTTP server running?
Or maybe it would be, because each instance consumes independent memory?
Should the memory consumption be the same as running all the VirtualHosts from the main Apache config file?
Many thanks. I mainly want to run one instance per domain because I want to be able to restart each VirtualHost configuration when needed without having to restart the others.
Thanx
First, I don't think you can run several Apache instances if they are all listening on port 80. Only one process can bind the port.
Apache will have several child processes, all children of the process listening on port 80, but each child process can be used for any VirtualHost.
You could achieve it by binding different IPs on port 80, so having IP-based VirtualHosts, or by using one Apache as a proxy for other Apache instances bound to other ports.
But the restart problem is not a real problem. Apache can perform a graceful restart (reload on some distributions) where each child process is reloaded after it finishes its current job. So it's a transparent restart, with no HTTP request killed. Adding or removing a VirtualHost does not need a full restart; a simple reload is enough.
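On most systems that graceful reload is one of these commands (names vary by distribution):

```shell
apachectl graceful              # classic httpd layout
sudo systemctl reload apache2   # Debian/Ubuntu with systemd
sudo service httpd reload       # RHEL/CentOS with SysV-style tools
```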
I have to think there are ways of achieving what you want without individual instances. Seriously large virtual-hosting companies use Apache; I am hard-pressed to believe your needs are more complex than theirs. Example: http://httpd.apache.org/docs/2.0/vhosts/mass.html
Maybe you should run two apache servers to do a rolling restart when it is really needed, which would prevent any individual site from being down as well.
I am able to issue commands to my EC2 instances via SSH, and these commands log output which I'm supposed to keep watching for a long time. The bad thing is that the SSH connection is closed after some time due to my inactivity, and I'm no longer able to see what's going on with my instances.
How can I disable/increase timeout in Amazon Linux machines?
The error looks like this:
Read from remote host ec2-50-17-48-222.compute-1.amazonaws.com: Connection reset by peer
You can set a keep-alive option in the ~/.ssh/config file in your home directory:
ServerAliveInterval 50
Amazon AWS usually drops your connection after only 60 seconds of inactivity, so this option makes the client ping the server every 50 seconds and keeps you connected indefinitely.
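A slightly fuller ~/.ssh/config sketch, scoped to EC2 hosts only (the Host pattern is just an example; adjust it to your region):

```
Host *.compute-1.amazonaws.com
    # send an application-level keepalive every 50 seconds
    ServerAliveInterval 50
    # give up after 3 unanswered keepalives instead of hanging forever
    ServerAliveCountMax 3
```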
Assuming your Amazon EC2 instance is running Linux (and the very likely case that you are using SSH-2, not 1), the following should work pretty handily:
Remote into your EC2 instance.
ssh -i <YOUR_PRIVATE_KEY_FILE>.pem <INTERNET_ADDRESS_OF_YOUR_INSTANCE>
Add a "client-alive" directive to the instance's SSH-server configuration file.
echo 'ClientAliveInterval 60' | sudo tee --append /etc/ssh/sshd_config
Restart or reload the SSH server, for it to recognize the configuration change.
The command for that on Ubuntu Linux would be:
sudo service ssh restart
On most other Linux distributions, though, the following is probably correct:
sudo service sshd restart
Disconnect.
logout
The next time you SSH into that EC2 instance, those super-annoying frequent connection freezes/timeouts/drops should hopefully be gone.
This also helps with Google Compute Engine instances, which come with similarly annoying default settings.
Warning: do note that the TCPKeepAlive settings (which also exist) are subtly yet distinctly different from the ClientAlive settings I propose above, and changing TCPKeepAlive from the default may actually hurt your situation rather than help.
More info here: http://man.openbsd.org/?query=sshd_config
Consider using screen or byobu and the problem will likely go away. What's more, even if the connection is lost, you can reconnect and restore access to the same terminal screen you had before, via screen -r or byobu -r.
byobu is an enhancement for screen, and has a wonderful set of options, such as an estimate of EC2 costs.
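A typical workflow, assuming screen is installed on the instance:

```shell
screen -S monitor     # start a named session on the EC2 instance
# ... run your long-lived command, then detach with Ctrl-A d ...
screen -r monitor     # reattach later, even after the SSH connection drops
```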
I know that for PuTTY you can use a keepalive setting so it sends an activity packet every so often, to keep the connection from going "idle" or "stale":
http://the.earth.li/~sgtatham/putty/0.55/htmldoc/Chapter4.html#S4.13.4
If you are using another client, let me know.
You can use MobaXterm, a free tabbed SSH terminal, with the settings below:
Settings -> Configuration -> SSH -> SSH keepalive
Remember to restart the MobaXterm app after changing the setting.
I have 10+ custom AMIs, all based on Amazon Linux AMIs, and I've never run into any timeout issues due to inactivity on an SSH connection. I've had connections stay open for more than 24 hours without running a single command. I don't think there are any such timeouts built into the Amazon Linux AMIs.