Update redis server from 1.2.6 to latest - redis

I need to update my Redis server.
I found a way to save the DB to disk and restore it afterwards, but my question is: will the new Redis server have problems reading the old DB structure?

The version of the dump file is encoded in the first 9 characters. So the following command can be used to check it:
$ head -1 dump.rdb | cut -c1-9
REDIS0002
Redis 1.2.6 used version 1 of the dump file (it can read and write only version 1).
Redis 2.4.6 uses version 2. However, it is able to read both version 1 and version 2 files; version 2 happens to be backward compatible with version 1 anyway.
To upgrade, you can just read the version 1 dump file with a recent Redis release, and then dump the file again (it will be written in the version 2 format). The new file may be smaller due to some optimizations available with recent Redis versions and the version 2 format.
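In practice the upgrade can look like this (a minimal sketch; the directory layout and the default dump.rdb name are assumptions):
$ redis-1.2.6/redis-cli shutdown             # stop the old instance so it no longer rewrites the dump
$ redis-2.4.4/src/redis-server redis.conf    # the new server loads the version 1 dump.rdb on startup
$ redis-2.4.4/src/redis-cli save             # dump again, now written in the version 2 format
$ head -1 dump.rdb | cut -c1-9               # should now print REDIS0002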
Optionally, you can check the integrity of the dump file before starting the 2.4 Redis instance by using the redis-check-dump command:
$ ../redis-2.4.4/src/redis-check-dump dump.rdb
==== Processed 19033 valid opcodes (in 639641 bytes) ===========================
This is a pure read-only utility, it cannot harm the dump file.

redis.exceptions.ResponseError: MISCONF Redis is configured to save RDB snapshots

I'm running into this problem when I try to save to Redis. It produces the message below.
MISCONF Redis is configured to save RDB snapshots, but it's currently unable to persist to disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.
The redis log file displays this:
Background saving started by pid 73
Write error saving DB on disk: Function not implemented
Has anyone ever experienced this?
I found the answer. You need WSL 2. To find out the version, run the command below in PowerShell:
wsl -l -v
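The output looks something like this (Ubuntu here is just an example distribution name):
  NAME      STATE           VERSION
* Ubuntu    Running         1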
If it is version 1, run the command below (with your distribution name) and open Ubuntu again:
wsl --set-version Ubuntu 2
More information: https://learn.microsoft.com/en-us/windows/wsl/install

Tar incremental restore : Cannot rename

I created a Python script to implement an incremental backup strategy over seven days, with a full backup on Sunday, using the tar command.
I have no problem generating my different backups.
However, I've got an issue when trying to restore an incremental backup, with this error message:
tar: Cannot rename `./path1' to `./path2': No such file or directory
tar: Exiting with failure status due to previous errors
My backup strategy runs for a Jenkins service.
Do you know why I get this error message, which stops my restore? And do you know how to fix it?
The short answer is: DO NOT use GNU's tar for incremental backups.
The long answer is: there is a pretty old bug that prevents incremental archives from being restored reliably. The bug still exists and has been reported multiple times since 2004.
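For context, the incremental pattern that hits this bug relies on --listed-incremental; a typical weekly cycle and restore look like this (a minimal sketch; the paths and file names are assumptions, not the asker's actual script):
$ tar --create --file=full.tar --listed-incremental=jenkins.snar /var/lib/jenkins     # Sunday: level-0 (full) backup
$ cp jenkins.snar jenkins-1.snar                                                      # keep the level-0 snapshot intact
$ tar --create --file=incr1.tar --listed-incremental=jenkins-1.snar /var/lib/jenkins  # Monday: level-1 incremental
$ tar --extract --file=full.tar --listed-incremental=/dev/null                        # restore the full backup first...
$ tar --extract --file=incr1.tar --listed-incremental=/dev/null                       # ...then each incremental in order; this is where "Cannot rename" can surface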
References:
Stack Exchange 01, Stack Exchange 02,
Ubuntu Launchpad,
GNU 01, GNU 02, GNU 03,
Debian

how to import dump.rdb file to redis local server

Hi, I'm trying to import a dump.rdb file into my local Redis. I'm using Ubuntu 14.04.
I've tried these steps:
Back up data from the server using the SAVE command
Locate the directory to put the dump.rdb file in
Since I installed Redis using this tutorial, I copied the imported dump.rdb to my Redis root directory, and then started the Redis server like this:
src/redis-server
and then connected the client using:
src/redis-cli
But when I tried to get all the keys using KEYS * I got (empty list or set). Where did I go wrong? I've been playing with this for hours. Any help? Thank you.
If you have followed the steps correctly it will work fine.
1) Make sure the imported dump.rdb contains your data.
2) Stop the Redis server.
3) Copy the file into the correct directory (inside the Redis bin directory, parallel to redis-server).
4) Make sure the copied data stays intact (because if your server is still running, it will replace your dump.rdb).
5) Start your Redis server and you will surely find the values.
If it still doesn't work, check the dbfilename in your redis.conf file. It must be dbfilename dump.rdb. If the location is set to something else, place the file in that directory instead.
Hope this works.
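Put together, those steps look something like this (a minimal sketch; it assumes the tutorial's source layout, run from the Redis root directory, and a hypothetical backup path):
$ src/redis-cli shutdown     # 2) stop the running server
$ cp ~/backup/dump.rdb .     # 3) copy the dump into the server's working directory (source path is hypothetical)
$ src/redis-server           # 5) restart; the dump is loaded on startup
$ src/redis-cli keys '*'     # the imported keys should now be listed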
I found the problem in my steps. Per the Redis quick start documentation,
using src/redis-server starts Redis without any explicit configuration file, so I need to start the server with the configuration file to make it read my dump.rdb file, like this:
src/redis-server redis.conf
now I can get all the imported data.
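The relevant settings in redis.conf are dbfilename and dir; the stock defaults below make the server load dump.rdb from the directory it was started in:
dbfilename dump.rdb
dir ./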

how to start redis 3.2 using a dump.rdb generated by redis 2.8

I created a dump.rdb using redis server version 2.8.22. It is ignored when redis server 3.2 is started. Is the data format in Redis 3.2 backward compatible with that in version 2.8.22?
It is backward compatible; I have tested the same and it works fine. The dump.rdb is loaded from the folder containing your redis-server executable, so make sure you copy the file from the 2.8.22 folder to the 3.2 folder; otherwise only the values in the dump.rdb already inside the 3.2 folder will be shown. Also make sure your Redis server is not running during this process, and make sure you start the Redis server with the ./redis-server redis.conf command: redis.conf is where the path to your dump.rdb file is set, and by default Redis takes the dump file parallel to redis-server.
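A minimal sketch of that migration (the directory names are placeholders, and it assumes both servers were started from their respective root directories):
$ redis-2.8.22/src/redis-cli save             # write a fresh dump.rdb with the old server
$ redis-2.8.22/src/redis-cli shutdown nosave  # stop 2.8 without it overwriting the dump again
$ cp redis-2.8.22/dump.rdb redis-3.2/         # copy the dump next to the 3.2 server's working directory
$ cd redis-3.2 && src/redis-server redis.conf # start 3.2; it reads the 2.8-format dump on startup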

How to compress and upload to s3 on the fly with s3cmd

I just found that my box has only 5% of HDD space left, and I have almost 250GB of MySQL bin files that I want to send to S3. We have moved from MySQL to NoSQL and are not currently using MySQL, but I would love to preserve the old data from before the migration.
The problem is that I can't just tar the files in a loop before sending them there. So I was thinking I could gzip on the fly before sending, so the compressed file is never stored on the HDD.
for i in * ; do cat i | gzip -9c | s3cmd put - s3://mybudcket/mybackups/$i.gz; done
To test this command, I ran it without the loop; it didn't send anything, but it didn't complain about anything either. Is there any way of achieving this?
OS is ubuntu 12.04
s3cmd version is 1.0.0
Thank you for your suggestions.
Alternatively you can use https://github.com/minio/mc . Minio Client, aka mc, is written in Golang and released under Apache License Version 2.
It implements the mc pipe command so users can stream data directly to Amazon S3. mc pipe can also pipe to multiple destinations in parallel. Internally, mc pipe streams the output and does a multipart upload in parallel.
$ mc pipe
NAME:
mc pipe - Write contents of stdin to files. Pipe is the opposite of cat command.
USAGE:
mc pipe TARGET [TARGET...]
Example
#!/bin/bash
for i in *; do
mc cat "$i" | gzip -9c | mc pipe "https://s3.amazonaws.com/mybudcket/mybackups/$i.gz"
done
As you can see, mc also implements the mc cat command :-).
The function to allow stdin to S3 was added to the master branch in February 2014, so make sure your version is newer than that. Version 1.0.0 is from 2011; the current version (at the time of this writing) is 1.5.2. It's likely you need to update your version of s3cmd.
Other than that, according to https://github.com/s3tools/s3cmd/issues/270 this should work, except that your "do cat i" is missing the $ sign to indicate it as a variable.
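With both fixes applied (an s3cmd release new enough to read from stdin, plus the missing $), the loop from the question becomes (a sketch; the bucket and prefix are taken verbatim from the question):
for i in * ; do gzip -9c "$i" | s3cmd put - "s3://mybudcket/mybackups/$i.gz"; done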