Redis server deleted after rebooting the machine

I am working with redis 3.0.7 on Ubuntu. Every time I reboot and then start the redis server using "nohup redis-server &", all of the key-value content is deleted.
I checked the snapshotting settings in redis.conf and I have the default configuration of:
save 900 1
save 300 10
save 60 10000
The machine was up for a long time (months) before the first reboot.
Any idea why this could be happening?

OK, apparently I have to start the server from the directory where the dump file actually is. Found it; it works now.
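For anyone hitting the same thing: redis resolves the dir setting (default ./) relative to the directory the server was started from, so a quick sketch of the fix might look like this (the /var/lib/redis path is just an example; use wherever your dump.rdb actually lives):
redis-cli CONFIG GET dir   # shows where the running server reads/writes dump.rdb
cd /var/lib/redis          # start from the directory that contains dump.rdb
nohup redis-server &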

Related

Redis is configured to save RDB snapshots, but is currently not able to persist on disk

I get the following error whenever I execute any command that modifies data in redis:
Redis is configured to save RDB snapshots, but is currently not able to persist on disk.
Commands that may modify the data set are disabled.
Please check Redis logs for details about the error.
I installed redis using brew on a Mac. How can I get the location of the log files that redis-server writes to? I tried looking for the redis conf file, but couldn't find it either.
What is the default location of [1] the redis conf file and [2] the redis log file?
How do I get rid of the above error and become able to execute commands that modify data in redis?
When installing with brew, the logfile is set to stdout. You need to edit /usr/local/etc/redis.conf and change logfile to something else. I set mine to:
logfile /var/log/redis-server.log
You'll also want to make sure the user that runs redis has write permissions to the logfile, or redis will simply fail to launch at all. Then just restart redis:
brew services restart redis
After restarting, it'll take a while for the error to show up in the logs, because it happens after redis fails its timed flushes. You should see something like:
[7051] 29 Dec 02:37:47.164 # Background saving error
[7051] 29 Dec 02:37:53.009 * 10 changes in 300 seconds. Saving...
[7051] 29 Dec 02:37:53.010 * Background saving started by pid 7274
[7274] 29 Dec 02:37:53.010 # Failed opening .rdb for saving: Permission denied
After a brew install, redis attempts to save to /usr/local/var/db/redis/, and since it is probably running as your current user and not root, it can't write there. Once redis has permission to write to the directory, your logfile will say:
[7051] 29 Dec 03:08:59.098 * 1 changes in 900 seconds. Saving...
[7051] 29 Dec 03:08:59.098 * Background saving started by pid 8833
[8833] 29 Dec 03:08:59.099 * DB saved on disk
[7051] 29 Dec 03:08:59.200 * Background saving terminated with success
and the stop-writes-on-bgsave-error error will no longer get raised.
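One way to grant that permission (a sketch, assuming the brew default directory and that redis runs as your login user):
sudo chown -R $(whoami) /usr/local/var/db/redis/
brew services restart redis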
I guess it is a bit late to add an answer here, but I stumbled on your question because I had the same error. I solved it by changing my redis.conf's dir variable like this:
# The filename where to dump the DB
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir /root/path/to/dir/with/write/access/
The default value is ./, so depending on how you launch your redis server, you might not be able to save snapshots.
Hope it helps someone!
In my case I resolved this issue with the steps below.
Cause: By default redis stores its data in ./, and if redis runs as the redis user, it may not be able to write to that directory; then you will see the above error.
Resolution :
Step 1 (set a valid location where redis can do write operations):
root#fpe:/var/lib/redis# vim /etc/redis/redis.conf
dir /var/lib/redis # (the redis user must have write permission on this location)
Step 2 (connect to the redis CLI, point the running server at the writable directory, and trigger a save):
127.0.0.1:6379> CONFIG SET dir "/var/lib/redis"
127.0.0.1:6379> BGSAVE
This will enable redis to write data to the dump file.
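To confirm the fix before relying on the timed saves, you can check writability as the redis user and watch the last-save timestamp advance (a sketch, assuming a redis system user exists):
sudo -u redis test -w /var/lib/redis && echo writable || echo "not writable"
redis-cli LASTSAVE   # returns the Unix time of the last successful save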
Going through the GitHub discussion, the proposed solution is to run
config set stop-writes-on-bgsave-error no
in the redis-cli. Note that this only disables the safety check; background saves will keep failing until the underlying problem is fixed.
here's the link
https://github.com/redis/redis/issues/584#issuecomment-11416418
Steps to fix this error:
Go to redis cli by typing redis-cli
127.0.0.1:6379> config set stop-writes-on-bgsave-error no
After that, try to set a key/value pair:
127.0.0.1:6379> set test_key 'Test Value'
127.0.0.1:6379> get test_key
"Test Value"
Check the following places:
/usr/local/Cellar/redis...
/usr/local/var/log/redis.log
/usr/local/etc/redis.conf
This error often indicates an issue with write permissions; make sure your RDB directory is writable.
It is usually because of permission limits. In my case, redis had disabled writes. You can try running redis-cli in the shell, and then run the following command:
config set stop-writes-on-bgsave-error no

How to sync time on host wake-up within VirtualBox?

I am running an Ubuntu 12.04-based box inside of Vagrant using VirtualBox. So far, everything is fine - except for one thing:
Let's assume that the VM is running. Then, the host goes to standby-mode. After waking it up again, the VM is still running, but its internal clock continues where it stopped when the host went down. So this basically means: Put the host to sleep for 15 minutes, wake it up again, then the VM's internal clock is 15 minutes late.
How can I fix this (setting the time manually is not an option for obvious reasons ;-))? Is there a way to run a script inside of a Vagrant VM whenever the host system changes its state?
I've read in the documentation that by default the VirtualBox Guest Additions sync the time with the host every 10 seconds. Apparently this is not happening, but I cannot find any place where it is disabled. Any ideas?
PS: The Guest Additions are installed and match the version of VirtualBox being used.
The documentation lacks some details here.
What VirtualBox does every 10 seconds is just a slight adjustment (something like 0.005 seconds). Only when the time difference reaches a threshold (20 minutes by default) is a "real" resync done.
You can reduce the threshold (e.g. to 10 seconds) with the following command:
VBoxManage guestproperty set <vm-name> "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold" 10000
Summarizing the answers of zilupe and Slobodan Kovacevic, the solution is to add the following to your Vagrantfile:
config.vm.provider 'virtualbox' do |vb|
  vb.customize [ "guestproperty", "set", :id, "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold", 1000 ]
end
This will synchronize the clocks each time the desync becomes > 1 s (1000 ms).
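To check that the property actually landed on the VM, you can enumerate the guest properties (replace the VM name with your own):
VBoxManage guestproperty enumerate <vm-name> | grep timesync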
Here is another solution to sync time between guest and host without installing the VirtualBox Guest Additions:
Install ntp on your guest, and uncomment these lines in /etc/ntp.conf:
disable auth
broadcastclient
Then restart ntp with service ntp restart.
Activate broadcast on your host:
For Linux users, edit your /etc/ntp.conf file and configure broadcast (you must adapt the IP):
broadcast 192.168.123.255
For Windows users, activate the "Windows Time" service. You can then read this page to configure it to broadcast time.
Then restart the time service on the host.
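To verify the guest is actually picking up the broadcasts, you can list ntp's peers on the guest; the broadcast source should appear in the output:
ntpq -p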
For me to get timesync working I had to do this:
vboxmanage setextradata «machine-name» "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled" 0
It turns the timesync on. It was, for some reason, off.
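You can confirm the current value of that flag with the matching get command (0 means host-time sync is enabled):
vboxmanage getextradata «machine-name» "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled"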
I found a solution:
install ntpdate
add "s" permission for ntpdate, this allows non-root users to run ntpdate as root: sudo chmod u+s /usr/sbin/ntpdate
add one line in ~/.bashrc: ntpdate -u ntp.ubuntu.com
After that, each time you login to the linux system, the time will be sync once.
You can install the VirtualBox Guest Additions in the VM to have VirtualBox sync the time automatically.

Get Apache Vhosts from Currently Running Apache

So... I accidentally deleted the vhosts files in my sites-available folder.
I would like to get my vhosts back. Is there any way to get it from the currently running apache config? I have not restarted yet.
This person says no, but this was a few years ago.
Apache : Recover "sites-enabled" config files
Remote disk recovery on a VPS -- here we go!
First, try:
lsof | grep /etc/apache2. If you see something like:
apache 1224 www-data 22r REG 8,5 1282410 1294349 /etc/apache2/sites-available/foo
you're in luck! From extundelete's website:
If you think the file may be still open by some program (for example,
if it is a movie file currently being played by a movie player), and
you know the filename, then first follow this procedure:
lsof | grep "/path/to/file"
progname 5559 user 22r REG 8,5 1282410 1294349 /path/to/file
Notice the number in the second column is 5559 and the
number in the fourth column is 22. The command to restore that file
is:
cp /proc/5559/fd/22 restored.file
If this doesn't work, well. Lots of people believe you are screwed. But I think there is hope!
Note that I rate this as a <50% chance of working, just to set expectations.
1. Shut off your system ASAP.
2. Make a full bit-for-bit backup of your disk image over the network (see the sketch below).
3. On a second Linux machine, do apt-get install extundelete.
4. Run extundelete on that disk image and see what you can get back.
If you can't back up your disk over the network (not enough space, no access to another Linux box) you can try booting into Linode recovery mode and attempting extundelete on the disk directly. This risks data corruption, so don't do it if you really value the disk -- or, again, back it up first.
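For the backup step, a minimal sketch, assuming you can boot into a recovery environment and reach another Linux box with enough space over SSH (the device name and host are placeholders):
dd if=/dev/sda bs=4M | gzip -c | ssh user@backupbox 'cat > vps-disk.img.gz'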
Of course -- nagging time -- the best solution is to have backups turned on in the first place.

Redis crashes instantly without error

I've got redis installed on my VM, and I haven't used it in a while. (Last time I used it, it worked; now it doesn't, and nothing's changed in that time (about a month).) Needless to say I'm deeply confused, but I'll post as much info as I can.
$ redis-server
The server starts, but throws a warning about overcommit memory being set to 0. I'm on a VM, so I couldn't change this setting from 0 to 1 even if I wanted to, which I wouldn't want anyway for my purposes. I've written a custom redis.config file though, which I want it to use (and which I was using in the past), so starting it with the default config file doesn't do me much good. Let's try this again.
$ redis-server redis.config
$
Nothing. Silence. No error message, just didn't start.
$ nohup redis-server redis.config > nohup.out&
I get a process ID, but then $ ps shows the process listed as stopped, and it shortly disappears. Again, no errors, and no output in nohup.out nor in the log file for redis. Below is the redis.config I'm using (with the comments stripped to keep it short):
daemonize yes
pidfile [my-user-account-path]/redis/redis.pid
port 0
bind 127.0.0.1
unixsocket [my-user-account-path]/tmp/redis.sock
unixsocketperm 770
timeout 10
tcp-keepalive 60
loglevel warning
logfile [my-user-account-path]/redis/logs/redis.log
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbcompression no
rdbchecksum no
dbfilename dump.rdb
dir [my-user-account-path]/redis/db
slave-serve-stale-data yes
slave-priority 100
appendonly no
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
# ADVANCED CONFIG is left at all default settings
I'm sure it's probably something stupid, probably even a permissions thing somewhere (I've tried executing this as root, fyi), to no avail. Anyone ever experience something similar with Redis?
I have been experiencing redis crashes as well. Just an FYI: the guy responsible for much of redis' development, Salvatore Sanfilippo, aka antirez, keeps an interesting blog that has some insight on redis crashes:
http://antirez.com/news/43
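For silent startup failures like the one described, one general debugging step (a sketch, assuming Redis 2.6+, where command-line arguments override the config file) is to turn off daemonization so any startup error prints straight to the terminal instead of going to a log file that may never get created:
redis-server redis.config --daemonize no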

S3 Error: The difference between the request time and the current time is too large

I get the error The difference between the request time and the current time is too large when calling the method amazons3.ListObjects:
ListObjectsRequest request = new ListObjectsRequest()
{
    BucketName = BucketName,
    Prefix = fullKey
};
using (ListObjectsResponse response = s3Client.ListObjects(request))
{
    bool result = response.S3Objects.Count > 0;
    return result;
}
What could it be?
The time on your local box is out of sync with the current time. Sync up your system clock and the problem will go away.
For those using Vagrant, a vagrant halt followed by vagrant up worked for me.
The clock is out of sync.
I followed the steps in this post to get it working again, but also had to run the following commands:
sudo ntpdate ntp.ubuntu.com
sudo apt-get install ntp
If at any time you get a message saying the NTP socket is still in use, stop it with sudo /etc/init.d/ntp stop and re-run your command.
I had the same error and I'm using Docker for Mac. Simply restarting Docker worked for me.
On WSL 2 or any Deb-based Linux (Ubuntu, Mint ...):
Check date:
date
Now run:
sudo apt install ntpdate
sudo ntpdate time.nist.gov
Output example:
18 Feb 14:27:36 ntpdate[24008]: step time server 132.163.97.4 offset 1009.140848 sec
Check date again:
date
Alternatively, look for the correctClockSkew option in the AWS CLI/SDK config and set it to true.
For those using Docker on Windows, try restarting the Docker Engine under Settings -> Reset -> Restart Docker.
In case anyone finds this using Laravel and Homestead, simply running
homestead halt
followed by
homestead up
and you're good to go again.
2021 answer:
AWS.config.update({
accessKeyId: 'xxx',
secretAccessKey: 'xxxx',
correctClockSkew: true
});
As others have said, your local clock is out of sync with AWS. You can keep it synced to Amazon's servers directly using NTP, so you won't have to worry about clock drift now or in the future.
Note: The below instructions are for *nix users. I've added a comment with how you might do it in Windows, but as a non-Windows user I can't verify their accuracy.
To install NTP, simply choose one of the following, depending on your distribution:
apt-get install ntp
or
yum install ntp
etc.
Configure NTP to use Amazon servers, like so:
vim /etc/ntp.conf
And in it, comment out the default servers and add these:
server 0.amazon.pool.ntp.org iburst
server 1.amazon.pool.ntp.org iburst
server 2.amazon.pool.ntp.org iburst
server 3.amazon.pool.ntp.org iburst
Restart ntp service:
sudo service ntp restart
Source: https://allcloud.io/blog/how-to-fix-amazon-s3-requesttimetooskewed/
And a more general article on keeping your time synchronized with NTP:
https://www.digitalocean.com/community/tutorials/how-to-set-up-time-synchronization-on-ubuntu-12-04
This can also be caused by using async/await with the construction of the request object outside the task and the actual call to AWS inside the task. If there are lots of tasks running and the task isn't scheduled in time, or there is some other operation delaying the actual call to AWS, this exception may be thrown. This is more common than you might guess because the default task scheduler does not process tasks in FIFO order, resulting in starvation for some tasks, especially under heavy load.
This reset my system clock correctly on OSX. S3 uploads using the JS SDK work for me now in local dev:
ntpdate us.pool.ntp.org
Read more about this here
If this problem happens on your localhost on Windows 10:
turn "Set time automatically" ON and "Set time zone automatically" ON.
This solved my problem.
If you get this error on Windows, follow these steps to solve your problem. Change your local time setting:
Step 1: Click on "Change date and time settings".
Step 2: From the popup Date and Time window, click on the Internet Time tab.
Step 3: Next, click on Change Settings.
Step 4: From the Server drop-down, select time.nist.gov, or check this website.
Step 5: Click on OK.
Restart your console and check. It works...
For those facing the same problem on Microsoft WSL 2 Ubuntu, the only workarounds right now are:
sudo hwclock -s
Or
wsl --shutdown
The clock offset occurs after waking Windows from sleep. Keep an eye on https://github.com/microsoft/WSL/issues/5324 for a fix from Microsoft.
If you're working with a VM, simply restarting the VM worked on mine.
If you are using VirtualBox, the time in the virtual machine is synced with the time of the real machine. Just fixing the time inside the virtual machine will not fix the problem.
I had this error because my local machine's time and timezone were set incorrectly. Changing them to the correct time and timezone worked for me.
I had the same problem on Windows 10 with Docker. You should run these commands step by step:
docker run --rm --privileged alpine hwclock -s
and again:
docker run --rm --privileged alpine hwclock -s
For the last command, which runs MinIO in Docker, don't forget to set your username, password, and timezone:
docker run -p 9000:9000 -e "MINIO_ACCESS_KEY=yourUserName" -e "MINIO_SECRET_KEY=YourPassword" -e "TZ=Europe/Berlin" -v /etc/localtime:/etc/localtime:ro minio/minio server /data
It is a little crude, but this worked for me.
I did a curl to the S3 server:
curl s3.amazonaws.com -v
Then I got this:
* Trying 52.216.141.158...
* TCP_NODELAY set
* Connected to s3.amazonaws.com (52.216.141.158) port 80 (#0)
> GET / HTTP/1.1
> Host: s3.amazonaws.com
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 307 Temporary Redirect
< x-amz-id-2: q2wUOf5ZC7iu2ymbRWUpZaM6GpPLLf/irrntuw/JNB7QYxDzQvcLHQbsbF2dp5zT8rBrGwqnOz0=
< x-amz-request-id: T4H1W4WKBE3F39RM
< Date: Sat, 09 Oct 2021 19:21:24 GMT
< Location: https://aws.amazon.com/s3/
< Server: AmazonS3
< Content-Length: 0
<
* Connection #0 to host s3.amazonaws.com left intact
* Closing connection 0
I took this date from the response:
Sat, 09 Oct 2021 19:21:24 GMT
and set the date in Ubuntu:
sudo date --set "Sat, 09 Oct 2021 19:21:24 GMT"
My code stopped throwing exceptions.
Now I have a script that does this every month.
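A hedged sketch of what such a script might look like (GNU date; the tr strips the carriage return from the HTTP header):
#!/bin/sh
sudo date --set "$(curl -sI https://s3.amazonaws.com | tr -d '\r' | grep -i '^Date:' | cut -d' ' -f2-)"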
To get rid of this problem, you have to adjust the client's clock so that it differs from the server's by no more than 15 minutes. Also set the standard time and zone for your system.
Check the full details here.
I have the exact same error message but it's not the same cause as any of the others above.
In my case I have a React browser app doing something like this:
import { Storage } from '#aws-amplify/storage'
...
await Promise.all(files.map(file => Storage.put(...)))
I am uploading a lot of files over a slow network connection.
With this code, the promises are all started at once, so the request time for all the requests is the same, but because the browser (or amplify?) is throttling the number of concurrent connections, the later requests don't actually hit the server until more than 15 minutes after they were created.
The solution is to limit the concurrency of the promise creation, e.g. by using something like bluebird's Promise.map with the concurrency option.
Using ntp may not work on all versions of your Linux-based server (e.g. an out-of-date Ubuntu server version that is no longer supported, which will block you from downloading ntp if it is not already installed).
If this is your situation, you can set independent time zones for your Linux VM:
https://community.rackspace.com/products/f/25/t/650
After you do this you may need to reset the time/date. Instructions for doing this are in this article:
http://codeghar.wordpress.com/2007/12/06/manage-time-in-ubuntu-through-command-line
If you are in 2016 and in Istanbul, here is a weird situation: Turkey decided not to switch to winter time. Set your local timezone to Moscow, then restart your machine.
I ran into this issue running Jet (Codeship) and Terraform on MacOS using Docker for Mac Beta channel 1.13.1-beta42.
Failed to read state: Error reloading remote state: RequestTimeTooSkewed: The difference between the request time and the current time is too large.
status code: 403, request id: 9D32BA2A5360FC18
This was resolved by restarting Docker.
I've just started getting this error, and syncing my clock doesn't help. (I've spent 2 hours syncing it to every timeserver I can find, including the AWS servers, but nothing makes a difference.)
Exactly the same thing started happening a year ago on Dec 31 2017. In that case, rebooting my system, and rebuilding my server (that uses the aws java sdk) fixed it. I don't know why. I assumed that AWS had some end-of-year timezone peculiarity. It's also possible that while I was doing these things, AWS timeservers fixed themselves. I have no way to test that hypothesis.
Now, the same thing has suddenly started to happen on Dec 30, 2018. It's not right at year-end, but close enough to seem suspicious. (Never got this error except on these dates.) Rebooting and rebuilding isn't helping this time.
My dev environment on this box is Windows 10 under Parallels. Nothing else on my system has changed - as I've double-checked by rolling back to prior Parallels snapshots. The clocks on both my host MacOS and the virtual Windows 10 are correct.
I'm suspecting an AWS bug.
Rebooting my Windows server fixed it for me.
The time was within ~1 second of the site time.in, so it wasn't off.
I was running into the same issue on my Mac. When I moved to a different timezone (PST to IST), somehow OSX was not picking up the timezone and time change automatically, so I had to set the two manually, and that caused a lag of some 15-20 seconds on my laptop. After enabling automatic sync, the time got synced and the S3 copy command started working. For reference.
You can use the chrony tool to keep your system's time in sync with AWS.
To synchronize time:
sudo yum -y install chrony
sudo systemctl enable chronyd
sudo systemctl start chronyd
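To verify that chrony is actually tracking a time source after it starts, you can run:
chronyc tracking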
This issue generally occurs when the s3cmd client machine's time is not synced with the server's.
Check the time on both machines, then
either sync the time between them using the date command:
Client# sudo date --set="string"
Client# sudo date --set="15 MAY 2011 1:40 PM"
or
install chrony and restart its service on both machines:
Client# sudo apt-get install chrony
Client# vi /etc/chrony/chrony.conf
pool ntp-server iburst
Client# sudo systemctl restart chronyd