When we add records (in the form of hashes or sets), do we need to COMMIT in order to save them?
Is there a similar provision in Redis?
I have created a virtual machine and added records, in the form of hashes, to the Redis cache on my machine.
However, when I restart my Redis client and query for my records, they do not exist!
I would sincerely appreciate anyone's prompt reply.
Thanks!
You can use the AOF (append-only file) feature for better durability.
You can use SAVE, as you apparently already discovered, or you can use BGSAVE to run the saving task in the background and continue operating.
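For example, a minimal redis-cli sketch against a default local instance (CONFIG SET lets you turn AOF on without a restart):

redis 127.0.0.1:6379> bgsave
Background saving started
redis 127.0.0.1:6379> config set appendonly yes // turn AOF on at runtime
OK
redis 127.0.0.1:6379> config get appendonly
1) "appendonly"
2) "yes"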
To see what is happening with your storage "live", you can use the MONITOR command. Just type it at the console after starting redis-cli:
A sample:
user@user:~/Projects$ redis-cli
redis 127.0.0.1:6379> monitor
OK
1361101579.987123 "monitor"
1361102054.206754 "set" "keySample" "valSample" // in another console window I ran "set keySample valSample"
I want to get all the commands processed by Redis without using the MONITOR command, because MONITOR only shows commands as they are happening, and that is not my case: I want to know the commands processed over the last 2 days. Is it possible to see the last 2 days of commands processed by Redis?
No, that is not possible. You might be able to get close if you have AOF persistence enabled and it hasn't been rewritten during that time.
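If AOF persistence is enabled, the append-only file is a plain-text log of every write command, though it carries no timestamps, so there is no exact two-day cutoff. A rough sketch of inspecting it, assuming the default file name and a data directory of /var/lib/redis:

$ redis-cli config get dir                       # ask Redis where its files live
1) "dir"
2) "/var/lib/redis"
$ strings /var/lib/redis/appendonly.aof | less   # browse the write commands logged so far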
It seems that the only way to sync data between Redis servers is to use the SLAVEOF command, but how can I know whether the data has been replicated successfully? I mean, I want to be notified just after the sync is done.
I've read some of the Redis source code, mainly replication.c, and found nothing official. The only way I know of for now is to use the Redis INFO command and poll a specific flag, which looks bad.
Is there any better way to do this?
The way you're trying, i.e. SLAVEOF, syncs data between a Redis master and a Redis slave. Whenever some data is written to the master, it is synced to the slave. So, technically, the sync will never be DONE.
If what you want is a snapshot of the current data set, you can use the BGSAVE command to save the data set into an RDB file. With the LASTSAVE command, you can check whether the BGSAVE has finished. Then copy the file to another host and load it with Redis.
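A rough sketch of that approach from the shell (the dump path and target host are placeholders, and the one-second poll interval is arbitrary):

$ before=$(redis-cli lastsave)     # UNIX time of the last completed save
$ redis-cli bgsave
Background saving started
$ while [ "$(redis-cli lastsave)" = "$before" ]; do sleep 1; done   # wait until LASTSAVE changes
$ scp /var/lib/redis/dump.rdb otherhost:/var/lib/redis/             # then load it on the other host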
I'm hoping to run a SLAVEOF command from a new Redis box to migrate data from an ElastiCache node to a normal EC2 box running Redis. Ideally I would run something like SLAVEOF IP DB_INDEX so that I'm only pulling data from DB_INDEX on the master instead of all available databases. Is this possible?
No, you cannot replicate just a single "database" in Redis. It is easier to think of these as "keyspaces" rather than individual databases. Further, according to the ElastiCache documentation, the way to import data is to upload a snapshot (RDB file), not via a replication command.
Since you are just doing a migration you could:
Replicate to a clean instance
Iterate over all databases you don't want and do a FLUSHDB on each (do NOT do a FLUSHALL).
Then, if you want the data to be on DB0 and it is not there, you can use the MOVE command on each key to put it in the default 0 database.
This would result in your new instance having just the data you want, in the "0" database, if you chose to move the keys; a sketch of these steps follows below.
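A sketch of those steps with redis-cli; the master address is a placeholder, and I'm assuming the data you want sits in database 5 (note that MOVE fails if the key already exists in database 0):

$ redis-cli slaveof master.example.com 6379   # replicate everything from the master
$ redis-cli slaveof no one                    # once caught up, detach and become a master again
$ redis-cli -n 3 flushdb                      # wipe each database you don't want (NOT flushall)
$ redis-cli -n 5 --scan | while read key; do  # move every key from db 5 into db 0
>   redis-cli -n 5 move "$key" 0
> done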
I am trying to figure out something and I've been searching for a while with no results.
What happens if a Redis server loses power or gets shut down or something that would wipe the RAM? Does it keep a backup somewhere?
I want to use Redis for a SaaS-style app, so that if I go to app.com/usernamesapp it would use Redis to verify that usernamesapp exists and to get its ID. At that point it would use MySQL for all the rest. The reason is that I want to begin showing the page ASAP, and most of the page is JavaScript, so all the MySQL work would happen after the fact.
Thanks
Redis can be configured to write to disk at regular intervals, so if the server fails you won't lose your data.
http://redis.io/topics/persistence
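For reference, the stock snapshotting rules in redis.conf look like this (the exact numbers may vary between versions); each line means "save after <seconds> if at least <changes> keys changed":

save 900 1       # after 900 sec if at least 1 key changed
save 300 10      # after 300 sec if at least 10 keys changed
save 60 10000    # after 60 sec if at least 10000 keys changed
appendonly yes   # optionally also enable the append-only file for stronger durability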
From the Redis FAQ
Redis is an in-memory but persistent on disk database
So a critical failure should not result in data loss. Read more at http://redis.io/topics/faq
I'm building a Redis db which consumes nearly all of my machine's memory. If Redis starts to save to disk while heavy inserting is going on, memory consumption roughly doubles (as described in the documentation). This leads to terrible performance in my case.
I would like to force Redis not to persist any data while I'm inserting, and to trigger the save manually afterwards. That should solve my problem, but however I configure the "save" setting, at some point Redis starts saving to disk. Any hint on how to prevent Redis from doing so?
You can disable saving by commenting out all the "save" lines in your redis.conf.
Alternately, if you don't want to edit any .conf files, run Redis with:
redis-server --save ""
As per the example config (search for "save"):
It is also possible to remove all the previously configured save points by adding a save directive with a single empty string argument, like in the following example:
save ""
I would also suggest looking at persistence-only slaves (master/slave replication): have a slave persist the data instead of the master.