After running fine for a while, I am getting a write error on my redis instance:
(error) MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.
In the log I see:
9948:C 22 Mar 20:49:32.241 # Failed opening the RDB file root (in server root dir /var/spool/cron) for saving: Read-only file system
However, my redis config file is /etc/redis/redis.conf as confirmed by:
redis-cli -p 6379 info | grep 'config_file'
config_file:/etc/redis/redis.conf
And there I have:
dir /mnt/data/redis
And indeed, there is a snapshot there.
But despite the above, redis now thinks my data directory is:
redis-cli -p 6379 CONFIG GET dir
1) "dir"
2) "/var/spool/cron"
This corresponds to the error I was getting, as quoted above.
Can anyone tell me why/how my data directory is changing after redis starts, such that it is no longer what is specified in the config file?
So the answer is that the redis server was hacked and its configuration changed, which turns out to be very easy to do. (I should point out that I had no reason to think it wasn't easy to do; I just assumed security by obscurity was sufficient in this case, which was wrong. No matter, this was just a playground, not any sort of production server.)
So don't open your redis port to the world. If you're on AWS, use security groups to limit access to the machines that need it, use AUTH (which is still not awesome, because then all clients need to know the single password, which also apparently gets sent in the clear), or put some middleware in front to control access.
Hacking redis is easy to do, can compromise your data, and can even enable unauthorized SSH access to your server.
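For example, a minimal hardening sketch for /etc/redis/redis.conf (the bind address and password here are placeholders, not values from the original setup):

# only listen on interfaces that actually need access
bind 127.0.0.1
# require clients to authenticate before running commands
requirepass use-a-long-random-password-here

Clients then authenticate with AUTH use-a-long-random-password-here (or redis-cli -a), keeping in mind the caveat above that the password travels in the clear without TLS.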
I am trying to read the configuration of the running Redis instance. I want to better understand how Redis is configured, especially in regard to persistence settings.
I have successfully connected to the running Redis instance (via an SSH tunnel) and tried to execute the following commands:
CONFIG GET *
CONFIG GET appendonly
However, I get the message
ERR unknown command 'CONFIG'
If I invoke the command "CONFIG GET" without any parameters I get the message
Invalid input argument for command: 'CONFIG GET', passed 0 arguments, must be in range 1 - 1
So the command is known. Seems to be a permission issue!? Is there a way to get the configuration?
The current Redis offering (March 2019) has the following settings for persistence:
appendonly yes
appendfsync everysec
It runs with 2 replicas.
Please note that this applies to the current service offering of Swisscom and might change in the future.
I recently configured Redis to use AOF as well as RDB snapshotting.
However, it does not look like the AOF is replayed correctly on server startup.
I stopped the service. Then I made sure /var/redis/appendonly.aof is valid using redis-check-aof.
Then I started the server again. At that moment, the RDB file was empty. That's another issue I need to look into: Redis started losing all the data from time to time.
In the log file I can see the AOF is supposed to be loaded correctly:
DB loaded from append only file: 1.474 seconds
However, when I try to read a value which I know should be there, I get nothing:
127.0.0.1:6379> get iQube:Live:wordCount:2015:11:13:10:6
(nil)
In the AOF though, there are commands like this:
*3
$6
INCRBY
$36
iQube:Live:wordCount:2015:11:13:10:6
$1
2
*2
$4
Is there something else I need to do to make this work?
My fault: I did not secure the server properly and became the target of probably the most typical attack against Redis. In effect, the AOF file contained FLUSHALL commands which wiped the DB clean upon loading.
At the very least, I recommend putting these three lines in redis.conf:
rename-command CONFIG someverylongandveryunguessablestring
rename-command FLUSHDB ""
rename-command FLUSHALL ""
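With CONFIG renamed, the original name stops working and only the unguessable alias does. Roughly (the dir value here is illustrative):

127.0.0.1:6379> CONFIG GET dir
(error) ERR unknown command 'CONFIG'
127.0.0.1:6379> someverylongandveryunguessablestring GET dir
1) "dir"
2) "/var/redis"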
I get the following error whenever I execute any command that modifies data in redis:
Redis is configured to save RDB snapshots, but is currently not able to persist on disk.
Commands that may modify the data set are disabled.
Please check Redis logs for details about the error.
I installed redis using brew on Mac. How can I find the location of the log file that redis-server writes to? I tried looking for the redis conf file, but couldn't find it either.
What is the default location of [1] the redis conf file and [2] the redis log file?
How do I get rid of the above error and become able to execute commands that modify data in redis?
When installing with brew the logfile is set to stdout. You need to edit /usr/local/etc/redis.conf and change logfile to something else. I set mine to:
logfile /var/log/redis-server.log
You'll also want to make sure the user that runs redis has write permission on the logfile, or redis will simply fail to launch at all. Then just restart redis:
brew services restart redis
After restarting it'll take a while for the error to show up in the logs, because it happens after redis fails its timed flushes. You should be seeing something like:
[7051] 29 Dec 02:37:47.164 # Background saving error
[7051] 29 Dec 02:37:53.009 * 10 changes in 300 seconds. Saving...
[7051] 29 Dec 02:37:53.010 * Background saving started by pid 7274
[7274] 29 Dec 02:37:53.010 # Failed opening .rdb for saving: Permission denied
After a brew install, redis attempts to save to /usr/local/var/db/redis/, and since redis is probably running as your current user and not root, it can't write to it. Once redis has permission to write to the directory (see the commands after the log excerpt below), your logfile will say:
[7051] 29 Dec 03:08:59.098 * 1 changes in 900 seconds. Saving...
[7051] 29 Dec 03:08:59.098 * Background saving started by pid 8833
[8833] 29 Dec 03:08:59.099 * DB saved on disk
[7051] 29 Dec 03:08:59.200 * Background saving terminated with success
and the stop-writes-on-bgsave-error error will no longer get raised.
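A minimal way to grant that permission, assuming brew's default save directory and that redis runs as your login user:

sudo mkdir -p /usr/local/var/db/redis
sudo chown -R "$(whoami)" /usr/local/var/db/redis
brew services restart redis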
I guess it is a bit late to add an answer here, but I stumbled on this question because I had the same error. I got it solved by changing the dir variable in my redis.conf like this:
# The filename where to dump the DB
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir /root/path/to/dir/with/write/access/
The default value is ./, so depending on how you launch your redis server, you might not be able to save snapshots.
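You can check which working directory the running server actually resolved, exactly as in the question above:

redis-cli CONFIG GET dir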
Hope it helps someone!
In my case, I resolved this issue with the steps below.
Cause: By default redis stores its data in ./, and if redis runs as the redis user it may not be able to write there, in which case you will face the above error.
Resolution:
Step #1 (set a valid location where redis can do write operations):
root@fpe:/var/lib/redis# vim /etc/redis/redis.conf
dir /var/lib/redis # (the redis user must have write permission on this location)
Step #2 (connect to the redis CLI, point the working directory there, and trigger a save):
127.0.0.1:6379> CONFIG SET dir "/var/lib/redis"
127.0.0.1:6379> BGSAVE
This will enable redis to write data to the dump file.
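If the directory exists but the redis user cannot write to it, ownership is the usual culprit; a sketch assuming a Debian-style redis user and group:

sudo mkdir -p /var/lib/redis
sudo chown redis:redis /var/lib/redis
sudo chmod 750 /var/lib/redis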
I was going through the GitHub discussion, and the proposed solution is to run
config set stop-writes-on-bgsave-error no
in the redis-cli.
Here's the link:
https://github.com/redis/redis/issues/584#issuecomment-11416418
Steps to fix this error:
Open the redis CLI by typing redis-cli
127.0.0.1:6379> config set stop-writes-on-bgsave-error no
After that, try to set a key value:
127.0.0.1:6379> set test_key 'Test Value'
127.0.0.1:6379> get test_key
"Test Value"
Check the following places:
/usr/local/Cellar/redis...
/usr/local/var/log/redis.log
/usr/local/etc/redis.conf
This error often indicates an issue with write permissions; make sure your RDB directory is writable.
It is usually because of permission limits. In my case, redis had writes disabled. You can run redis-cli in the shell, and then re-enable writes with the following command:
config set stop-writes-on-bgsave-error no
I set up the EC2 instance a couple of days ago, and even last night I was able to SSH to it with no problems. This morning, I can't SSH to it. Port 22 is already open in the security group and I haven't changed anything since last night.
Error:
ssh: connect to host [ip address] port 22: Connection refused
I had a similar issue recently and I couldn't figure out why it was happening, so I had to create a new instance, set it up again, and connect and configure all the EBS volumes on the new one. That took me a couple of hours... and now it's happening again. On the previous one I had installed denyhosts, which might have blocked me, but on the current one only apache2 and mysql are running.
The current instance has been up for 16 hours now, so I don't think it's because it didn't finish booting... Also, port 22 is open to all sources (0.0.0.0/0) and is using tcp protocol.
Any ideas?
Thanks.
With the help of @abhi.gupta200297, we were able to resolve it.
The issue was an error in /etc/fstab: sshd was supposed to start after fstab was processed successfully, but fstab failed, so sshd never started, and that's why the connection was refused. The solution was to create a temporary instance, mount the root EBS volume from the original instance, and comment out the offending lines in its fstab; voila, it let me connect again. For the future, I simply stopped using fstab: I put a bunch of shell commands that mount the EBS volumes to their directories into an /etc/init.d/ebs-init-mount file and then ran update-rc.d ebs-init-mount defaults to register the script. I'm no longer having issues with being locked out of SSH.
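A minimal sketch of such an init script (device names and mount points are examples, not the original setup):

#!/bin/sh
### BEGIN INIT INFO
# Provides:          ebs-init-mount
# Required-Start:    $local_fs
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:
# Short-Description: Mount EBS volumes at boot without blocking on failure
### END INIT INFO
case "$1" in
  start)
    # mount each EBS volume; '|| true' keeps the boot going if one is missing
    mount /dev/xvdf /mnt/data || true
    mount /dev/xvdg /mnt/logs || true
    ;;
esac

Make it executable with chmod +x before running update-rc.d as described above.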
UPDATE 4/23/2015
Amazon team created a video tutorial of similar issue and show how to debug using this method: https://www.youtube.com/watch?v=_P29ZHu_feU
Looks like sshd might have stopped for some reason. Is the instance EBS-backed? If that's the case, try shutting it down and starting it back up. That should solve the problem.
Also, are you able to SSH from the AWS web console? They have a Java plugin there to SSH into the instance.
For those of you who came across this post because you are unable to SSH into your EC2 instance after a reboot, this is cross-posted to a similar question at serverfault:
From the AWS Developer Forum post on this topic:
Try stopping the broken instance, detaching the EBS volume, and
attaching it as a secondary volume to another instance. Once you've
mounted the broken volume somewhere on the other instance, check the
/etc/sshd_config file (near the bottom). I had a few RHEL instances
where Yum scrogged the sshd_config inserting duplicate lines at the
bottom that caused sshd to fail on startup because of syntax errors.
Once you've fixed it, just unmount the volume, detach, reattach to
your other instance and fire it back up again.
Let's break this down, with links to the AWS documentation:
Stop the broken instance and detach the EBS (root) volume by going into the EC2 Management Console, clicking on "Elastic Block Store" > "Volumes", then right-clicking on the volume associated with the instance you stopped.
Start a new instance in the same region and with the same OS as the broken instance, then attach the original EBS root volume as a secondary volume to your new instance. The commands in step 4 below assume you mount the volume to a folder called "data".
Once you've mounted the broken volume somewhere on the other instance, check the "/etc/ssh/sshd_config" file for the duplicate entries by issuing these commands:
cd /etc/ssh
sudo nano sshd_config
ctrl-v a bunch of times to get to the bottom of the file
ctrl-k all the lines at the bottom mentioning "PermitRootLogin without-password" and "UseDNS no"
ctrl-x and Y to save and exit the edited file
@Telegard points out (in his comment) that we've only fixed the symptom. We can fix the cause by commenting out the 3 related lines in the "/etc/rc.local" file. So:
cd /etc
sudo nano rc.local
look for the "PermitRootLogin..." lines and delete them
ctrl-x and Y to save and exit the edited file
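If you prefer a non-interactive edit, a sed sketch like this (assuming the broken volume is mounted at /data on the rescue instance) deletes the offending lines from both files; note it removes every matching line, so check the files afterwards:

sudo sed -i -e '/PermitRootLogin without-password/d' -e '/UseDNS no/d' /data/etc/ssh/sshd_config
sudo sed -i -e '/PermitRootLogin without-password/d' -e '/UseDNS no/d' /data/etc/rc.local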
Once you've fixed it, just unmount the volume,
detach by going into the EC2 Management Console, clicking on "Elastic Block Store" > "Volumes", then right-clicking on the volume associated with the instance you stopped,
reattach to your other instance and
fire it back up again.
This happened to me on a Red Hat EC2 instance because these two lines were being automatically appended to the end of the /etc/ssh/sshd_config file every time I launched my instance:
PermitRootLogin without-password
UseDNS no
One of these append operations was done without a line break, so the tail of the sshd_config file looked like:
PermitRootLogin without-passwordUseDNS noPermitRootLogin without-passwordUseDNS no
That caused sshd to fail to start on the next launch. I think this was caused by the bug reported here: https://bugzilla.redhat.com/show_bug.cgi?id=956531 The solution was to remove all the duplicate entries at the bottom of the sshd_config file, and add extra line breaks at the end.
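A quick way to catch this class of problem before a reboot locks you out is sshd's built-in config check, which prints the offending line on a syntax error and nothing on success:

sudo /usr/sbin/sshd -t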
Go to your AWS management console > select instance > right click and select "Get System Logs"
This will list what went wrong.
Had the same issue, but the system logs had this:
Starting sshd: /var/empty/sshd must be owned by root and not group or world-writable.
[FAILED]
Used the same steps described above to detach the volume and attach it to a connectable instance. Then used:
sudo chmod 755 /var/empty/sshd
sudo chown root:root /var/empty/sshd
(https://support.microsoft.com/en-us/help/4092816/ssh-fails-because-var-empty-sshd-is-not-owned-by-root-and-is-not-group)
Then detached and reattached it to the original EC2 instance and could access it via ssh again.
I got similarly locked out of ssh after detaching an EBS volume but forgetting to modify /etc/fstab.
If your Ubuntu has systemd, you can edit /lib/systemd/system/local-fs.target and comment out the last two lines:
#OnFailure=emergency.target
#OnFailureJobMode=replace-irreversibly
I haven't tested this extensively and don't know if there are any risks or side effects involved, but so far it works like a charm. It mounts the root volume and all other volumes (except those that are misconfigured, obviously), then it continues the boot process until SSH is up, so you can connect to the instance and fix the incorrect fstab entries.
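Independently of that workaround, marking secondary volumes nofail in /etc/fstab keeps a missing or misconfigured volume from blocking the boot in the first place (device name and mount point here are examples):

/dev/xvdf  /mnt/data  ext4  defaults,nofail  0  2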
In my case, the volume was out of space and a service was failing to start. I used the AWS tutorial (from Sherzod's post) to mount it on a good EC2 instance, clean it up, and remove the service from startup, before remounting it and verifying that things worked.
For me, it was that my IP had changed. Navigate to the security groups and update the My IP entry in the inbound rules. Hope this helps someone.
I had the same issue: not able to connect to the AWS instance, with a permission denied error.
I got on a screen-share call with the AWS team, and they guided me to change the folder permissions on the instance using the following user data script.
Steps:
Stop the instance
Actions > Instance settings > Edit user data
Enter the below script and save:
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
chown root:root /home
chmod 755 /home
chmod 700 /home/ubuntu
chmod 700 /home/ubuntu/.ssh
chmod 600 /home/ubuntu/.ssh/authorized_keys
ls -ld /home /home/ubuntu /home/ubuntu/.ssh /home/ubuntu/.ssh/authorized_keys
chown ubuntu:ubuntu /home/ubuntu -R
--//
Save, start the instance, and connect to it with the correct pem key. This resolved my problem.
*Change ubuntu to your instance's username.
I've been doing some tests with mongodb and sharding, and at some point I tried to add new config servers to my mongos router (at that time, I was playing with just one config server). But I couldn't find any information on how to do this.
Has anybody tried to do such a thing?
Unfortunately you will need to shutdown the entire system.
Shutdown all processes (mongod, mongos, config server).
Copy the data subdirectories (dbpath tree) from the config server to the new config servers.
Start the config servers.
Restart mongos processes with the new --configdb parameter.
Restart mongod processes.
From: http://www.mongodb.org/display/DOCS/Changing+Config+Servers
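As a rough sketch of the copy and mongos restart steps above (hostnames, paths, and the use of rsync are illustrative, not prescribed by the docs):

# copy the config server's dbpath tree to each new config server
rsync -a /data/configdb/ cfg2:/data/configdb/
rsync -a /data/configdb/ cfg3:/data/configdb/
# restart each mongos pointing at all three config servers
mongos --configdb cfg1:27019,cfg2:27019,cfg3:27019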
Use DNS CNAMEs
Make sure to use DNS entries, or at least /etc/hosts entries, for all mongod and mongo config servers when you set up multiple config servers in /etc/mongos.conf, and when you set up replica sets and/or sharding.
E.g. a common pitfall on AWS is to use just the private DNS names of EC2 instances, but these can change over time... and when that happens you'll need to shut down your entire mongodb system, which can be extremely painful to do if it's in production.
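For example, in /etc/mongos.conf (assuming the old ini-style format of this thread's era; the hostnames are illustrative):

configdb = cfg1.example.internal:27019,cfg2.example.internal:27019,cfg3.example.internal:27019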
The Configure Sharding and Sample Configuration Session pages appear to have what you're looking for.
You must have either 1 or 3 config servers; anything else will not work as expected.
You need to dump and restore content from your original config server to 2 new config servers before adding them to mongos's --configdb.
The relevant section is:
Now you need a configuration server and mongos:
$ mkdir /data/db/config
$ ./mongod --configsvr --dbpath /data/db/config --port 20000 > /tmp/configdb.log &
$ cat /tmp/configdb.log
$ ./mongos --configdb localhost:20000 > /tmp/mongos.log &
$ cat /tmp/mongos.log
mongos does not require a data directory; it gets its information from the config server.