Combining Redis databases into a single instance

We have two Redis instances on AWS (oldhost1, oldhost2) that are overprovisioned, and I'm wondering whether it's possible to combine their data into a single newhost1.
Is it possible to replicate:
oldhost1/db0 -> newhost1/db0
oldhost2/db1 -> newhost1/db1
I tried setting up newhost1 to replicate from oldhost1, which pulled in oldhost1's data, but when I then repointed replication at oldhost2, it clobbered the data that had already been replicated.
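Replication alone can't do this merge: a Redis replica mirrors exactly one master, and re-pointing it triggers a full resync that discards the existing dataset, which is the clobbering you saw. One workaround is to replicate from the first master, detach, and then copy the second master's keys over individually with MIGRATE COPY, which targets a specific destination database. A rough sketch with redis-cli (hostnames and ports are placeholders for your environment; MIGRATE's COPY option needs Redis 3.0+):

```shell
# Step 1: pull oldhost1's data into newhost1 via replication, then detach.
redis-cli -h newhost1 SLAVEOF oldhost1 6379    # REPLICAOF on Redis 5+
# ...wait for the initial sync to complete, then:
redis-cli -h newhost1 SLAVEOF NO ONE           # detach, keeping the data

# Step 2: copy oldhost2's db0 into newhost1's db1 key by key,
# which leaves the already-replicated data in db0 untouched.
redis-cli -h oldhost2 -n 0 --scan | while read -r key; do
  redis-cli -h oldhost2 -n 0 MIGRATE newhost1 6379 "$key" 1 5000 COPY
done
```

The `1` after the key name is MIGRATE's destination-db argument, so oldhost2's keys land in newhost1's db1 regardless of which database they came from. Note this pauses oldhost2 per key for the transfer, so run it during a quiet period if the dataset is large.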

Related

Redis - Cache entry evictions across multiple databases

When using multiple databases on a single Redis instance and its memory is full, inserting new data makes Redis sample a number of keys and apply an algorithm to them to determine which ones should be evicted.
But if I'm using db0 and db1 and I try to insert a new record into db1, will Redis sample keys from the same database, or does it sample them globally?
When it performs eviction, Redis chooses eviction candidates from all databases.
In your case, it might evict keys from either db0 or db1.
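Which keys are eligible also depends on the configured policy: the allkeys-* policies consider every key, while the volatile-* policies only consider keys that have a TTL set, but in both cases candidates are drawn from across all databases. A minimal redis.conf fragment (the values here are examples, not recommendations):

```
maxmemory 100mb
maxmemory-policy allkeys-lru
maxmemory-samples 5
```

`maxmemory-samples` controls how many keys are sampled per eviction; raising it makes the approximated LRU closer to true LRU at some CPU cost.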

Merge two persistent caches in Apache Ignite

My application uses Apache Ignite persistent storage. For some weeks I ran the application storing the persistent data in, let's say, "c:\db1". Later I ran the same application with persistent data in "c:\db2". The data was only stored on this one server node.
Is there a way to merge the data from db1 folder to db2 folder?
No, you can't, at least not easily.
The best way would be to start two nodes in separate clusters, one using c:\db1 and one using c:\db2, and stream data from one to the other:
Start the two clusters
Start a helper application that will load the data
In the application, start two client nodes with different configurations - one connected to the first cluster, one connected to the second
Transfer the data roughly like this (code is not tested!)
IgniteCache<Object, Object> cache1 = client1.cache("mycache");
IgniteCache<Object, Object> cache2 = client2.cache("mycache");
// Scan every entry in the first cluster's cache and copy it into the second.
for (Cache.Entry<Object, Object> e : cache1.query(new ScanQuery<>())) {
    cache2.put(e.getKey(), e.getValue());
}

Redis: List all data structures

I'm an absolute newbie with Redis.
I need to:
list all databases
list all data structures
I've connected to redis 4.0.11 server using redis-cli.
Redis is a key-value store, not a relational database. You can't query or inspect a schema the way you would in a database; you can only retrieve the value for a key that you pass in.
Usually, a key-value store like Redis is used alongside a database for high-performance storage and retrieval of values by key, when the performance of the database alone is not enough.
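That said, since you're already connected with redis-cli, a few built-in commands get close to what you're asking: you can list the databases that currently hold keys and walk the keys to see each one's data type. A sketch (the output depends entirely on your server's contents):

```shell
# Databases that currently hold keys, with key counts per db:
redis-cli INFO keyspace

# How many databases the server is configured with (16 by default):
redis-cli CONFIG GET databases

# Enumerate every key in the current database and print its type
# (string, list, set, zset, hash, ...):
redis-cli --scan | while read -r key; do
  echo "$key: $(redis-cli TYPE "$key")"
done
```

`--scan` uses the non-blocking SCAN command under the hood, so it is safe on a live server, unlike `KEYS *`.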

Is it possible to perform SQL Query with distributed join over a local cache and a partitioned cache?

I am currently using apache ignite 2.3.0 and the java api. I have a data grid with two nodes and two different caches. One is local and the other partitioned.
Let's say my local cache is on node #1.
I want to perform an SQL query (SqlFieldsQuery) with distributed join so that it returns data from local cache on node #1 and data from partitioned cache on node #2.
Is it possible? Do I need to specify the join in some particular order or activate a specific flag?
So far, none of my tests return rows from the partitioned cache that aren't located on the same node as the local cache.
I tested the same query with a distributed join over two different partitioned caches with no affinity, and it returned data from different nodes properly. Is there a reason why this wouldn't work with a local cache too?
Thanks
It is not possible to perform joins (either distributed or co-located) between LOCAL and PARTITIONED caches. The workaround is to use two PARTITIONED caches.
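A sketch of that workaround, using made-up cache and type names: recreate the formerly LOCAL cache as PARTITIONED, and keep distributed joins enabled on the query so rows can be pulled from other nodes:

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

// Hypothetical cache: change its mode from LOCAL to PARTITIONED.
CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("formerlyLocalCache");
cfg.setCacheMode(CacheMode.PARTITIONED);

// Distributed joins must still be enabled explicitly on the query.
SqlFieldsQuery qry = new SqlFieldsQuery(
        "select p.name, o.total from Person p join Orders o on p.id = o.personId")
    .setDistributedJoins(true);
```

If the data that used to live in the local cache is small, making it REPLICATED instead of PARTITIONED is another option: the join then becomes co-located on every node and doesn't need the distributed-join flag at all.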

How to remove shards in crate DB?

I am new to crate.io, and I am not very familiar with the term "shard". I am trying to understand why, when I run my local db, it creates 4 different shards.
I need to reduce this to one single shard, because it causes problems when I try to export the data from Crate into JSON files (it creates 4 different shards!).
Most users run CrateDB on multiple servers. To distribute the records of a table between multiple servers, it needs to be split; each piece of a table is called a shard.
To make sure that no records are lost when a server fails, CrateDB by default creates one replica of each shard: a copy of the data that is located on a different server.
While the system doesn't have full copies of all shards, the cluster state is yellow / underreplicated.
CrateDB running on a single node will never be able to create a redundant copy (because there is only one server).
To change the number of replicas you can use the command ALTER TABLE my_table SET (number_of_replicas = ...)
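Note that number_of_replicas only controls copies, not how the table is split: in CrateDB versions of that era the shard count was fixed when the table was created. To actually get down to a single shard, you would typically recreate the table with an explicit shard count and copy the data over, roughly like this (table and column names are made up):

```sql
-- Create a replacement table with exactly one shard and no replicas.
CREATE TABLE my_table_single (
  id   INTEGER,
  data STRING
) CLUSTERED INTO 1 SHARDS
  WITH (number_of_replicas = 0);

-- Copy everything across, then drop or rename as needed.
INSERT INTO my_table_single (id, data)
  (SELECT id, data FROM my_table);
```

With one shard and zero replicas on a single node, the cluster state should go green and an export produces a single file.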