Is it possible to monitor only one database? - redis

Currently, my understanding is that the 'monitor' command outputs all commands received by the server, no matter which database number they are sent to.
This is a problem for me, as I use one db for holding 'normal' data and one db for holding session data, and the output from the session db makes it nearly impossible to read the output from the other db.
Is there a way to limit the output to only one database?

What about this?
redis-cli monitor | grep '(db 1)'
That way you would just get the output of DB 1
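Note that depending on your Redis version, the MONITOR output may show the database in square brackets instead, e.g. 1339518083.107412 [1 127.0.0.1:60866] "get" "foo"; in that case, filter on the bracket form:
redis-cli monitor | grep ' \[1 '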

Databases in redis are not at all like databases in SQL. They are essentially just a predefined key prefix with no configuration of their own.
If you only want to see changes to the real data, you will need to set it up as a separate instance so that session data goes to a different process.
There isn't much overhead in doing this (in most scenarios it will actually improve performance), and there are other good reasons for using multiple instances. For example, you probably want your real data written to disk in real time and backed up, but session data is worthless after a server restart, so it doesn't need to be saved to disk at all. With a shared instance you would have to save and back up everything, which isn't particularly good for performance, given that session data changes much more often than permanent data.
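As a rough sketch of that setup (the port is just an example), the session store could run as a second, throwaway instance with persistence disabled entirely:

# hypothetical second instance for session data only: no RDB snapshots, no AOF
redis-server --port 6380 --save "" --appendonly no

With sessions on port 6380, monitoring the main instance (redis-cli -p 6379 monitor) then shows only the 'normal' data traffic, which also answers the original question.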

If only this bug was resolved, the following would work:
redis-cli -n 1 monitor
where 1 is the database number.

Related

archiving some redis data to disk

I have been using redis a lot lately, and really am loving it. I am mostly familiar with persistence (rdb and aof). I do have one concern. I would like to be able to selectively "archive" some of my data to disk (or cheaper storage) once it is no longer important. I don't really want to delete it because it might be valuable at some point.
All of my keys are named id_<id>_<someattribute>. So when I am done with id 4, I want to "archive" all keys that match id_4_*. I can view them quite easily with the command line, but I can't do anything with them, per se. I have quite a bit of data (very large bitmaps) associated with this data set, and frankly I can't afford the space once the id is no longer relevant or important.
If this were MySQL, I would have my different tables and could very easily just dump one to a .sql file and then drop the table. The actual .sql file isn't directly useful to me, but I could reimport the data if/when I need it. Or maybe I have two MySQL databases and I want to move one table from one to the other. Are there redis equivalents of these processes? Is there some way to make an rdb or aof file that is a subset of the data?
Any help or input on this matter would be appreciated! Thanks!
@Hoseong Hwang recently asked what I did, so I'm posting what I ended up doing.
It was really quite simple, actually. I was helped by the fact that my key space is segmented by user. All of my keys had the structure user_<USERID>_<OTHERVALUES>. My archival needs were on a per-user basis: some users' data no longer needed to be kept in redis.
So, I started up another instance of redis-server, on another port locally (6380?) or another machine; it makes no difference. Then, I wrote a short script that basically just called KEYS user_<USERID>_* (I understand the blocking nature of KEYS; my key space is so small it didn't matter, and you can use SCAN if that is an issue for you). Then, for each key, I ran MIGRATE against that new redis-server instance. After they were all done, I did a SAVE to ensure that the rdb file for that instance was up to date. And now I have that rdb, which is just the content that I wanted to archive. I then terminated that temporary redis-server and the memory was reclaimed.
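For reference, a minimal sketch of that kind of script using nothing but redis-cli (the port 6380, the user id, and the 1000 ms timeout are placeholders; --scan uses SCAN under the hood, so it avoids the blocking KEYS call):

# copy user 42's keys to a temporary archive instance on port 6380
# (MIGRATE removes each key from the source, which is what reclaims the memory)
redis-cli --scan --pattern 'user_42_*' | while read -r key; do
  redis-cli migrate localhost 6380 "$key" 0 1000
done
# write the archive instance's dataset out to its dump.rdb
redis-cli -p 6380 save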
Now, keep that rdb file somewhere for cheap, safe keeping. And if you ever need it again, doing the reverse of the process above to get those keys back into your main redis-server would be fairly straightforward.
Instead of trying to extract data from a live Redis instance for archiving purposes, my suggestion would be to extract the data from a dump file.
Run a bgsave command to generate a dump, and then use redis-rdb-tools to extract the keys you are interested in - you can easily get the result as a json file.
See https://github.com/sripathikrishnan/redis-rdb-tools
You can keep the json data in flat files, or try to store them in a relational database or a document store if you need them to be indexed for retrieval purposes.
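For example, something along these lines (the dump path and key regex are placeholders; check the project's README for the exact options):

rdb --command json --key 'id_4_.*' /var/redis/dump.rdb > id_4_archive.json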
A few suggestions for you...
I would like to be able to selectively "archive" some of my data to
disk (or cheaper storage) once it is no longer important. I don't
really want to delete it because it might be valuable at some point.
If such data is that valuable, use a traditional database for storage. Despite redis supporting snapshotting to disk and AOF logs, you should view it as mostly volatile storage. The primary use case for redis is reducing latency, not persistence of valuable data.
So when I am done with id 4, I want to "archive" all keys that
match id_4_*
What constitutes "done"? You need to ask yourself this question: does it mean that after 1 day the data can fall out of redis? If so, just use TTLs and expiration to let redis remove the object from memory, as shown below. If you need it again, fall back to the database and pull the object back into redis. The first client will take the hit of pulling from the db, but subsequent requests will be cached. If "done" means something not associated with a specific duration, then you'll have to remove items from redis manually to conserve memory space.
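For the duration-based case, that looks something like this (the key names and the one-week TTL are just examples):

# expire an existing key one week from now
redis-cli expire id_4_bitmap 604800
# or set the TTL at write time
redis-cli set id_4_summary "some value" EX 604800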
If this were MySQL, I would have my different tables and could very
easily just dump one to a .sql file and then drop the table. The actual
.sql file isn't directly useful to me, but I could reimport the data
if/when I need it.
We do the same at my firm. Important data is imported into redis from the RDBMS by on-demand jobs. We don't drop tables; we just selectively import data from the database into redis. Nothing wrong with that.
Is there some way to make an rdb or aof file that is a subset of the
data?
I don't believe there is a way to do selective archiving; it's either all or none.
IMO, spend more time playing with redis. I highly recommend leveraging out-of-box features instead of reinventing and/or over-engineering solutions to suit your needs.
Hope that helps!

Using data from multiple redis databases in one command

At my current project I actively use redis for various purposes. There are 2 redis databases for current application:
The first one contains absolutely temporary data: how many users are online, who is online, and various admin counters. This db is cleared by a start-up script before the application starts.
The second database is used for persistent data like users' ratings, users' friends, etc.
Everything seems to be correct and everybody is happy.
However, when I started implementing new functionality in my application, I discovered that I need to intersect the set of a user's friends with the set of online users. These sets are stored in different redis databases, and I haven't found any way to do this in redis, short of changing the application architecture and moving all keys into one namespace (database).
Is there actually any way to perform a redis command using data from multiple databases? Or is my use case for redis wrong, and do I have to fix the system architecture?
There is not. There is a command that makes it easy to move keys to another DB:
http://redis.io/commands/move
If you move all keys to one DB, make sure you don't have any key clashes! You could suffix or prefix the keys from the temp DB to make absolutely sure. MOVE will do nothing if the key already exists in the target DB, so make sure you act on a '0' reply.
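As a sketch, assuming friends:42 lives in DB 1 and online_users in DB 0 (both key names hypothetical):

# move the friends set from DB 1 into DB 0; the reply is 0 if the key already exists there
redis-cli -n 1 move friends:42 0
# with both sets in DB 0, the intersection works
redis-cli -n 0 sinterstore online_friends:42 online_users friends:42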
Using multiple DBs is definitely not a good idea:
A Quote from Salvatore Sanfilippo (the creator of redis):
I understand how this can be useful, but unfortunately I consider
Redis multiple database errors my worst decision in Redis design at
all... without any kind of real gain, it makes the internals a lot
more complex. The reality is that databases don't scale well for a
number of reason, like active expire of keys and VM. If the DB
selection can be performed with a string I can see this feature being
used as a scalable O(1) dictionary layer, that instead it is not.
With DB numbers, with a default of a few DBs, we are communication
better what this feature is and how can be used I think. I hope that
at some point we can drop the multiple DBs support at all, but I think
it is probably too late as there is a number of people relying on this
feature for their work.
https://groups.google.com/forum/#!msg/redis-db/vS5wX8X4Cjg/8ounBXitG4sJ

How do I change between redis databases?

I am new to redis and I haven't figured out how to create another redis database and switch to it.
How do I do this?
By default there are 16 databases (indexed from 0 to 15) and you can navigate between them using the SELECT command. The number of databases can be changed in the redis config file with the databases setting.
By default, it selects database 0. To select a specific one, use
redis-cli -n 2 (selects db 2)
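Inside an interactive session, the same switch is done with SELECT; note how the prompt shows the selected database index:

127.0.0.1:6379> SELECT 2
OK
127.0.0.1:6379[2]> SET foo bar
OK
127.0.0.1:6379[2]> SELECT 0
OK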
Note: this is not a direct answer to the OP's question. However, this text is too long for a comment, and I thought I'd share it anyway, to clarify things to the OP. Hope I don't break too many SO rules by doing this...
Some extra info on multiple databases:
Please note that using multiple databases in one redis instance is discouraged.
It is a nice feature for playing around and getting to know redis.
In more serious setups, if you have multiple ports at your disposal, it's preferred and more performant to use separate instances. At our company, we run about 50 instances on the development/staging server, and about 5 on the production server.
The reason is that redis transactions are only atomic within one db number anyway. Most (if not all) clients nicely separate that for you in the connect() phase. And if you have to connect separately anyway, it's just as easy to connect to a different port.
The core of redis is also single threaded. That's one of the things that makes redis so quick and simple. Everything is sequential. If you use multiple instances instead of just one, you gain the benefit of multi-processing (on multi-core machines).
redis-cli        # connect to the server first
redis-cli info   # the Keyspace section at the bottom lists the existing databases
exit             # leave when you are done
redis-cli -n 1   # connect directly to database 1 (1 is the database index, not a name)

The fastest method to move redis data to MySQL

We have a big shopping and product-dealing system. We faced lots of problems with MySQL, so after a little R&D we decided to use Redis, and we have started integrating it into our system.
The following data, which previously hit the database directly, has now been moved to Redis:
User shopping cart details
Affiliate click-tracking records
Product-dealing user data
Other site stats
I am not only storing the data in Redis; I have written cron jobs that move the Redis data into MySQL at intervals. This is the main point where I am facing issues.
I am looking for solutions to the points below:
Is there any other way to dump big data from Redis to MySQL?
If Redis fails, our data is only stored in a file, so is it possible to store that data directly in the MySQL database?
Does Redis have any trigger system, like a queue, that I could use to avoid the cron jobs?
Is there any other way to dump big data from Redis to MySQL?
Redis has the ability (using BGSAVE) to generate a dump of the data in a non-blocking and consistent way.
https://github.com/sripathikrishnan/redis-rdb-tools
You could use Sripathi Krishnan's well-known package to parse a redis dump file (RDB) in Python, and populate the MySQL instance offline. Or you can convert the Redis dump to JSON format, and write scripts in any language you want to populate MySQL.
This solution is only interesting if you want to copy the complete data of the Redis instance into MySQL.
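The flow would look roughly like this (the dump path depends on the dir setting in your redis.conf):

redis-cli bgsave
rdb --command json /var/redis/dump.rdb > dump.json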
Does Redis have any trigger system, like a queue, that I could use to avoid the cron jobs?
Redis has no trigger concept, but nothing prevents you from posting events to Redis queues each time something must be copied to MySQL. For instance, instead of:
# Add an item to a user shopping cart
RPUSH user:<id>:cart <item>
you could execute:
# Add an item to a user shopping cart
MULTI
RPUSH user:<id>:cart <item>
RPUSH cart_to_mysql <id>:<item>
EXEC
The MULTI/EXEC block makes it atomic and consistent. Then you just have to write a little daemon waiting on items of the cart_to_mysql queue (using BLPOP commands). For each dequeued item, the daemon has to fetch the relevant data from Redis, and populate the MySQL instance.
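A minimal sketch of such a daemon in shell (the cart_items table, its columns, and the <id>:<item> encoding are assumptions carried over from the example above; a real daemon would escape the values or use prepared statements):

while true; do
  # BLPOP blocks until an item arrives; redis-cli prints the key name, then the value
  item=$(redis-cli blpop cart_to_mysql 0 | tail -n 1)
  user_id=${item%%:*}
  product=${item#*:}
  mysql -e "INSERT INTO cart_items (user_id, item) VALUES ('$user_id', '$product')" shop
done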
If Redis fails, our data is only stored in a file, so is it possible to store
that data directly in the MySQL database?
I'm not sure I understand the question here. But if you use the above solution, the latency between Redis updates and MySQL updates will be quite limited, so if Redis fails, you will only lose the very last operations (contrary to a solution based on cron jobs). It is of course not possible to have 100% consistency in the propagation of the data, though.

database design, question about implementation

A question regarding my SQL database design for a project I am working on.
I will be receiving data every few seconds, and I am going to need to store that data in a database. I am using MySQL for my DBMS. The data needs to be stored with a user id attached to each piece of data. Each instance of the application will only be handling one user's data, but the remote database will be storing all users' data. That is why I need user ids, to know whose data is whose.
My idea was to wait until I receive, say, 50 data packets and create a delimited string of all 50 packets (maybe separated by commas), then push that string to the database along with the user id, and store the data like that. My question is: is that a good way to do it? Is there a better way? Is this bad practice? Tips please! =)
I will be receiving a lot of this data: one data packet about every second, sometimes faster. Just let me know what you think.
The DBMS will be running on a remote machine. The application will be running on an android phone.
Thanks in advance!
I would not suggest concatenating a bunch of values together to send a delimited string to the database. That just creates additional work on the database to parse the string.
Any reasonable framework for interacting with the database will let you create and send batches of SQL statements with different values for the bind variables to the database. That keeps the nice, friendly syntax of the stored procedure or INSERT statement, it keeps the database properly normalized, and it accomplishes the performance goal of minimizing the number of round-trips.
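For instance, with MySQL a single multi-row INSERT sends a whole batch in one round-trip (the table, columns, and values here are hypothetical):

mysql -e "INSERT INTO readings (user_id, value, recorded_at) VALUES
          (4, 0.97, NOW()),
          (4, 0.93, NOW()),
          (4, 0.95, NOW());" sensordata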
If the DBMS is running on a good server, and all you do with the data is a simple insert into a reasonably simple table, 1 insert per second should not be a strain at all. I'd expect it to be hardly measurable.
The question you really have to answer is how much tolerance you have for losing data. A request per second transferring under 1 KB of data isn't much, especially using JSON vs. XML. Then again, battery life is something to keep in mind on mobile devices, so making a request every 5-60 seconds is also doable.
There's no reason you cannot batch your updates to the server.
If you have no tolerance for data loss, you could collect your batch of 50 updates on local storage, and upload them. If a failure occurs in transmission you can resend. In this case, however, I would want to have some record ID that's reasonably guaranteed to be unique, such as a UUID. This way the server can see which records it's already processed and exclude them from reprocessing.
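One way to get that server-side exclusion, assuming a readings table with the record UUID as its primary key (all names hypothetical), is INSERT IGNORE, which silently skips rows whose key already exists:

mysql -e "INSERT IGNORE INTO readings (record_uuid, user_id, value)
          VALUES ('3f8e6c1a-9b7d-4c2e-8a1f-2d5e6f7a8b9c', 4, 0.97);" sensordata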
I'm going to address the issue of storing it as a delimited string. How do you intend to query this data after it is stored? If you will need to find the data for one or even a small group of values, but not the entire string, do not consider storing the data this way, as it will give you horrible query performance and will be very painful to write queries for. In general, storing more than one piece of data in a field is a bad thing; it means you need a related table.
Also, for what you are doing, if you don't need to do analytical querying of the data, perhaps a NoSQL database would be a better choice than a relational database.