I use Redis as my queuing engine, and now there are millions of items in my queue. I need to find an item there and watch its properties.
If this were SQL Server or any other RDBMS, I could write a SQL query against the database to find the record. But a Redis queue is a list: I can only push from one side and pop from the same side or the other side.
How can I do that?
With the vague nature of the question, we can only give you a vague answer.
You need to create secondary indexes to find your data. Since you have already figured out that you cannot run SQL-like queries, you should look at the following link:
http://redis.io/topics/indexes
One important point that you should consider is the following (taken from the above link)
Implementing and maintaining indexes with Redis is an advanced topic, so most users that need to perform complex queries on data should understand if they are better served by a relational store. However often, especially in caching scenarios, there is the explicit need to store indexed data into Redis in order to speedup common queries which require some form of indexing in order to be executed.
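As a rough sketch of what such a secondary index looks like: keep each item under its own key, and maintain one index entry per attribute value. Plain Python dicts stand in for the Redis structures here so the example is self-contained; against a real server these would be SET/SADD/SMEMBERS calls via a client like redis-py, and the field names are made up for illustration.

```python
import json

# Stand-ins for Redis structures: `items` mimics one key per item,
# `index` mimics one SET per (field, value) pair, e.g. "status:pending".
items = {}
index = {}

def enqueue(item_id, payload):
    """Store the item and index the fields we need to query on."""
    items[item_id] = json.dumps(payload)           # SET item:<id> <json>
    for field in ("status", "type"):               # hypothetical indexed fields
        key = f"{field}:{payload[field]}"
        index.setdefault(key, set()).add(item_id)  # SADD status:pending <id>

def find(field, value):
    """Look up item ids via the secondary index instead of scanning the queue."""
    return {i: json.loads(items[i]) for i in index.get(f"{field}:{value}", set())}

enqueue(1, {"status": "pending", "type": "email"})
enqueue(2, {"status": "done", "type": "email"})

print(find("status", "pending"))  # only item 1 comes back
```

The cost, as the quote above warns, is that every write must also update the index entries, and every delete must remove them.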
Related
I'm trying to build a freelance platform using MongoDB as the main database and Redis for caching, but I really can't figure out the proper way to cache. Basically I'll store JWT tokens, verification codes and other items with an expiration date. On the other hand, I'll store five big collections: Gigs, JobOffers, Reviews, Users, and Companies. I also want to query them.
Example Use Case 1
Getting job offers only categorised as "Web Design"
Example Use Case 2
Getting job offers only posted by Company X
Option 1
For these two queries I can create two hashes:
hash1
"job-offers:categoryId", jobOfferId, JobOffer
hash2
"job-offers:companyId", jobOfferId, JobOffer
Option 2
Using RedisJson and RedisSearch for querying and holding everything in JSON format
Option 3
Using redisSearch with creating multiple hashes
I couldn't figure out which approach would be best, or whether there is another approach better than any of these.
Option 1 seems suitable for your scenario. Binding job offers to category or company ids is the smartest solution.
You can use HGETALL to get all fields data from your hashset.
When using Redis as a request-caching mechanism, remember that you have to keep the Redis cache consistently updated if it is generated from a SQL or NoSQL database.
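A minimal sketch of Option 1, with a plain dict standing in for the Redis hashes (the key names are hypothetical; on a real server the commands noted in the comments would do the work):

```python
import json

# `hashes` stands in for Redis HASHes: one hash per category and one per
# company, each mapping jobOfferId -> serialized JobOffer.
hashes = {}

def add_offer(offer_id, offer):
    """Write the offer into both secondary hashes (HSET key field value)."""
    for key in (f"job-offers:category:{offer['categoryId']}",
                f"job-offers:company:{offer['companyId']}"):
        hashes.setdefault(key, {})[offer_id] = json.dumps(offer)

def offers_by_category(category_id):
    """Use case 1: HGETALL job-offers:category:<id> returns the whole category."""
    key = f"job-offers:category:{category_id}"
    return {k: json.loads(v) for k, v in hashes.get(key, {}).items()}

add_offer("o1", {"title": "Landing page", "categoryId": "web-design", "companyId": "x"})
add_offer("o2", {"title": "Logo", "categoryId": "graphics", "companyId": "x"})
```

Note that an offer lives in two hashes at once, so an update or delete has to touch both — that is the consistency burden the answer above mentions.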
Good question.
As far as I can see, Redis data (and part of Mongo's) is held in RAM, and RAM is more expensive than hard disk. If you don't care about the price, Redis/Mongo can handle your situation, and the data can be recovered from AOF/RDB files (or the equivalent), so you can use whichever you want.
If you do care about the price of RAM, probably just use MySQL with the InnoDB engine: it's cheap, it's on disk, it can recover, and you know a lot of people use these databases (MySQL, Postgres).
If I were you, I would probably choose MySQL with InnoDB and create the right indexes; it is fast enough for tables that hold millions of rows (it gets less good once you reach hundreds of millions).
I see that by using the RethinkDB connector one can achieve realtime querying capabilities by subscribing to specifically named lists. I assume this is not actually the fastest solution, as the query probably updates only after changes to records are written to the database. Is there any recommended approach to achieve realtime querying capabilities on the deepstream side?
There are some favourable properties like:
Number of unique queries is small compared to number of records or even number of connected clients
All manipulation of records that are subject to querying is done via RPC.
I can imagine multiple ways how to do that:
Imitate the RethinkDB connector approach. But for that I am missing a list.listen() method. With it, I would be able to create a backend process that creates a list on demand and, on each RPC CRUD operation on records, updates all currently active lists (= queries).
Reimplement basic list functionality on top of records and use the above approach with the now-existing .listen()
Use .listen() in events?
Or do we have list.listen() and I just missed it? Or there is more elegant way how to do it?
Great question - generally lists are a client-side concept, implemented on top of records. Listen notifies you about clients subscribing to records, not necessarily changing them - change notifications arrive via mylist.subscribe(data => {}) or myRecord.subscribe(data => {}).
The tricky bit is the very limited querying capability of caches. Redis has a basic concept of secondary indices that can be searched for ranges and intersection, memcached and co are to my knowledge pure key-value stores, searchable only by ID - as a result the actual querying would make most sense on the database layer where your data will usually arrive in significantly less than 200ms.
The RethinkDB search provider offers support for RethinkDB's built-in realtime querying capabilities. Alternatively you could use MongoDB and trail its operations log, or use Postgres and deepstream's built-in subscribe feature for change notifications.
I have some data from an API I need to cache. This data I want invalidated after X days, but I want it available locally to save time querying and compiling things for the end user.
Presently I have a PostgreSQL database. I want to keep this around because there's permanent data like user records I don't want to put in Mongo (unless you guys can convince me otherwise). I really have nothing against Mongo, but I can normalize some things with users and the only way I could think to do it without massive amounts of duplication is via PostgreSQL.
Now my API data is flat, and in JSON. I don't need to create any sort of link to any other table and it has a field that I can use as a key pretty easily. My idea is to literally "throw" the data into a Mongo instance and query as needed, invaliding every X days. This also offers some persistence should the server go down for whatever reason.
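Whichever store ends up holding the data, the "invalidate after X days" requirement is just a time-to-live check. A minimal sketch of the idea — Redis does this natively with EXPIRE/SETEX — with a dict plus timestamps standing in, and a 3-day TTL picked purely as an example:

```python
import time

TTL_SECONDS = 3 * 24 * 3600  # "X days"; 3 is an arbitrary example value
cache = {}                   # key -> (stored_at, value)

def cache_put(key, value, now=None):
    cache[key] = ((now if now is not None else time.time()), value)

def cache_get(key, now=None):
    """Return the cached value, or None (and evict) if it has expired."""
    entry = cache.get(key)
    if entry is None:
        return None
    stored_at, value = entry
    if (now if now is not None else time.time()) - stored_at > TTL_SECONDS:
        del cache[key]       # expired: evict and report a miss
        return None
    return value

cache_put("api:resource:42", {"name": "example"}, now=0)
print(cache_get("api:resource:42", now=60))               # fresh: the value
print(cache_get("api:resource:42", now=TTL_SECONDS + 1))  # expired: None
```

The `now` parameter only exists so the sketch can be exercised without waiting days; a real implementation would drop it and let Redis expire keys itself.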
So my questions are these: Is this a good use case for Mongo over memcached? Should I just put the raw data in memcached instead? And if you do suggest Mongo, should I move my users table and its relations over to Mongo as well?
Thanks!
This is the sort of thing Redis is really good for. Redis, possibly with selective cache invalidation via PostgreSQL's LISTEN and NOTIFY, is a pretty low pain way to manage caching.
Another option is to use UNLOGGED tables in PostgreSQL.
I have 3.5 million records (read-only) currently stored in a MySQL DB that I want to pull out into Redis for performance reasons. So far, I've managed to store things like this in Redis:
1 {"type":"Country","slug":"albania","name_fr":"Albanie","name_en":"Albania"}
2 {"type":"Country","slug":"armenia","name_fr":"Arménie","name_en":"Armenia"}
...
The key I use here is the legacy MySQL id, so with some Ruby glue I can break as few things as possible in this existing app (and this is a serious concern here).
Now the problem is when I need to search for the keyword "Armenia" inside the value part. It seems there are only two ways out:
Either I multiply the Redis indexes:
id => JSON values (as shown above)
slug => id (reverse indexing based on the slug, that could do the basic search trick)
finally, another huge index specifically for autocomplete, as shown in this post : http://oldblog.antirez.com/post/autocomplete-with-redis.html
Or I use sunspot or some other full-text search engine (unfortunately, I currently use ThinkingSphinx, which is too tightly tied to MySQL :-(
So, what would you do? Do you think moving a single table from MySQL to Redis is even a good idea? I'm afraid of the memory footprint those gigantic Redis key/values could have on a 16 GB RAM server.
Any feedback on a similar Redis usage ?
Before I start with a real answer, I wanted to mention that I don't see a good reason for you to be using Redis here. Based on the use cases you describe, something like Elasticsearch would be more appropriate for you.
That said, if you just want to be able to search for a few different fields within your JSON, you've got two options:
Auxiliary index that points field_key -> list_of_ids (in your case, "Armenia" -> 2).
Use Lua on top of Redis with JSON encoding and decoding to get at what you want. This is way more flexible and space efficient, but will be slower as your table grows.
Again, I don't think either is ideal, because Redis doesn't sound like a good choice here, but if you must, those should work.
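Option 1 above can be sketched as follows. Plain Python dicts stand in for the Redis structures here; in real Redis, each index entry would be a SET maintained with SADD at write time and read back with SMEMBERS.

```python
# Sample records matching the question's data, keyed by legacy MySQL id.
records = {
    1: {"type": "Country", "slug": "albania", "name_fr": "Albanie", "name_en": "Albania"},
    2: {"type": "Country", "slug": "armenia", "name_fr": "Arménie", "name_en": "Armenia"},
}

# Auxiliary index: lowercased field value -> set of record ids.
# In Redis: SADD idx:armenia 2, etc., done once at load time.
index = {}
for rid, rec in records.items():
    for field in ("slug", "name_en", "name_fr"):
        index.setdefault(rec[field].lower(), set()).add(rid)

def search(keyword):
    """Exact-match lookup through the auxiliary index (SMEMBERS in real Redis)."""
    return [records[rid] for rid in sorted(index.get(keyword.lower(), set()))]

print(search("Armenia"))  # finds the record with id 2
```

This only handles exact matches; prefix autocomplete, as in the antirez post linked above, needs the additional sorted-set index the question mentions.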
Here's my take on Redis.
Basically I think of it as an in-memory cache that can be configured to evict the least recently used data (LRU). That is the role I made it play in my use case, and the logic may be applicable to thinking about your use case.
I'm currently using Redis to cache results for a search engine based on some complex (slow) queries, backed by data in another DB (similar to your case). So Redis serves as cache storage for answering queries. Every query is served either from the data in Redis, or from the DB on a Redis cache miss. So, note that Redis is not replacing the DB, but merely extending it with a cache in my case.
This fit my specific use case, because the addition of Redis was supposed to assist future scalability. The idea is that repeated access of recent data (in my case, if a user does a repeated query) can be served by Redis, and take some load off of the DB.
Basically my Redis schema ended up looking somewhat like the duplicated index you outlined above. I used SETs and sorted sets to create "batches" of Redis keys, each of which pointed to specific query results stored under a particular Redis key. And in the DB, I still had the complete data set and an index.
If your data set fits in RAM, you could do the "table dump" into Redis and get rid of MySQL. I could see this working, as long as you plan for persistent Redis storage and for the possible growth of your data, if this "table" will grow in the future.
So depending on your actual use case, how you see Redis fitting into your stack, and the load your DB serves, don't rule out the possibility of having to do both of the options you outlined above (which is what happened in my case).
Hope this helps!
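The serve-from-Redis-or-fall-back-to-DB flow described above is the classic cache-aside pattern. A minimal sketch, with a dict standing in for Redis and a stub function standing in for the slow backing query:

```python
cache = {}    # stands in for Redis
db_calls = 0  # counts how often the slow backing query actually runs

def slow_query(term):
    """Stub for the expensive search against the backing DB."""
    global db_calls
    db_calls += 1
    return f"results for {term}"

def cached_search(term):
    key = f"search:{term}"
    if key in cache:            # cache hit: Redis answers, DB untouched
        return cache[key]
    result = slow_query(term)   # cache miss: ask the DB...
    cache[key] = result         # ...and store the answer for next time
    return result

cached_search("armenia")  # miss -> hits the DB
cached_search("armenia")  # hit  -> served from the cache
```

With real Redis, the stored result would also get a TTL (or rely on LRU eviction, as described above) so stale entries eventually disappear.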
Redis does provide Full Text Search with RediSearch.
RediSearch implements a search engine on top of Redis. It also enables more advanced features, like exact phrase matching, auto-suggestions and numeric filtering for text queries, that are not possible or efficient with traditional Redis search approaches.
I am using action filter attributes for logging user activity on certain actions that interact with a SQL database. Similarly, I could log the activity in the SQL tables using triggers on those tables during each activity. I would like to know which of the two methods is best practice (performance-wise).
I think the action filter is certainly the cleanest and best-practice approach, since it lives in the application layer. Part of the benefit of being there is that it's managed code, so if something breaks you can easily locate the problem. There is also the benefit that all your code is in one spot.
Database triggers are a big no-no in many companies, since they have a habit of causing infinite loops when an unknowing programmer creates some logic that fires the trigger over and over again, causing the database to fail. Some companies do allow triggers, but only well documented and very lightly used. Hope this helps.
Performance of logging depends greatly on the system architecture. If you have three load-balanced web servers hitting one main database, triggers would have to handle all the load on that database, while action filters would split the load across the three servers. In that scenario, action filters would be better.
In terms of best practices, I wouldn't use either of those approaches. I would set up Transactional Replication to another SQL server. This approach would run without impacting performance at all. The transaction log is already being generated and replication would just spin up a separate process that's reading that log.