How to create a document in BIRT using Redis

I want to know how to create a document in BIRT using Redis.
When data is written to Redis, is it possible to trigger a procedure in BIRT?
Otherwise, would it be better for BIRT to poll a Redis queue?

I am not aware of any Redis data source for the most recent version of BIRT (4.5). You can either create a scripted data source that talks directly to Redis (via Java, using the Jedis client library, for instance) or a web API that exposes Redis. A POJO data source might work as well.
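To make the "web API that exposes Redis" option concrete, here is a minimal sketch of such an API that a report could read JSON from over HTTP. A plain dict stands in for the Redis connection, and all key names are made up; in a real setup you would swap the dict for an actual Redis client.

```python
# Minimal key-value HTTP API, a stand-in for "a web API that exposes Redis".
# FAKE_REDIS is a dict playing the role of the Redis connection (assumption);
# swap it for a real client in practice.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

FAKE_REDIS = {"report:42": {"title": "Q3 sales", "rows": 120}}  # stand-in data

class KeyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        key = self.path.lstrip("/")
        value = FAKE_REDIS.get(key)
        body = json.dumps(value if value is not None else {"error": "not found"}).encode()
        self.send_response(200 if value is not None else 404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep request logging quiet

def make_server(port=0):
    """Bind the server; port 0 picks a free ephemeral port."""
    return HTTPServer(("127.0.0.1", port), KeyHandler)
```

The report then only needs an HTTP/JSON data source, which BIRT handles well, and no Redis driver at all.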

Related

Where to place my business logic when using Redis as my core DB

OK, so I want to build a platform for creating news feeds from the RSS sources I read. I want to ingest data into Redis using Kafka, and this data in Redis will also be used by other services. I was thinking I should implement an API in front of my Redis DB so that my business logic is not scattered across clients making requests to Redis directly; I have considered implementing a REST API on a server that holds the core business logic. But could I use Lua scripting to do this and avoid that extra node in my architecture? I mean: instead of implementing a POST endpoint in a REST API that creates a feed in my Redis DB, I would implement a Lua script that does it, and when an outside service needs to create a feed it would call this Lua script directly. This would reduce the round trips needed to make a change in my DB, but I don't know whether it could be problematic in some way.
A Lua script can't act as a REST server in Redis, as it can't get out of the sandbox and can't run in the background.
You might want to check the Redis module RedisGears, as it can run Python scripts and is not limited to the sandbox.
Another module you might want to check is RedisRest.
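Whichever route you take (REST endpoint or Lua script), the real win is keeping the "create a feed" logic in one server-side unit so clients call a single operation instead of issuing several Redis commands themselves. Here is a sketch of that idea, with an in-memory dict standing in for Redis; all key names are hypothetical.

```python
# Business logic for "create a feed" kept in one place. With Lua, this whole
# method would be a single EVAL call: Redis runs scripts atomically, so the
# three writes below could not interleave with other clients' commands.
# The dict is a stand-in for a Redis connection (assumption).
import time

class FeedStore:
    def __init__(self):
        self.db = {}  # stand-in for Redis

    def create_feed(self, feed_id, title, rss_urls):
        # One entry point instead of three client-side round trips.
        self.db[f"feed:{feed_id}"] = {"title": title, "created": time.time()}
        self.db[f"feed:{feed_id}:sources"] = list(rss_urls)
        self.db.setdefault("feeds:index", []).append(feed_id)
        return feed_id
```

The trade-off is exactly what the answer above points out: a Lua script gives you this atomicity and fewer round trips, but it stays inside the sandbox, so anything that needs background work or outbound calls still wants a real server (or a module like RedisGears).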

How do I distribute data into multiple nodes of redis cluster?

I have large number of key-value pairs of different types to be stored in Redis cache. Currently I use a single Redis node. When my app server starts, it reads a lot of this data in bulk (using mget) to cache it in memory.
To scale up Redis further, I want to set up a cluster. I understand that in cluster mode, I cannot use mget or mset if keys are stored on different slots.
How can I distribute data into different nodes/slots and still be able to read/write in bulk?
This is handled in the Redis client library, so you need to find out whether a library with this feature exists in the language of your choice. For example, if you are using Go, then per the docs, redis-go-cluster provides this feature.
https://redis.io/topics/cluster-tutorial
redis-go-cluster is an implementation of Redis Cluster for the Go language, using the Redigo library as the base client. It implements MGET/MSET via result aggregation.
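"Result aggregation" works because every key maps deterministically to one of 16384 hash slots (CRC16-XMODEM of the key, modulo 16384, per the Redis Cluster specification): the client groups keys by slot, issues one MGET per group, and stitches the results back into the original order. A pure-Python sketch of the slot computation and grouping (a real client additionally maps slots to nodes):

```python
# Redis Cluster hash-slot computation: CRC16 (XMODEM variant, poly 0x1021,
# init 0) of the key, mod 16384. Known check value: CRC16("123456789") = 0x31C3.

def crc16_xmodem(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
        crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Hash-tag rule: if the key contains a non-empty {...} section, only that
    # part is hashed, so {user1}.name and {user1}.age land in the same slot
    # and stay MGET-able together.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end > start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

def group_by_slot(keys):
    """Group keys by hash slot, one prospective MGET per group."""
    groups = {}
    for k in keys:
        groups.setdefault(key_slot(k), []).append(k)
    return groups
```

The hash-tag rule is also how you can distribute data yourself while keeping bulk reads: choose key names so that keys you always fetch together share a {tag}.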

Can multiple independent applications using Redisson share same clustered Redis?

I would like to ask whether there will be any contention issues due to shared access to the same Redis cluster by multiple separate applications that use the Redisson library (each application, in turn, has several instances of itself).
Does Redisson support such a use case? Or do I need to configure Redisson in each application, for example by adding some kind of prefix or app name (as is possible with Quartz, where you can define table prefixes for separate applications that access the same database and use Quartz independently)?
Won't tasks submitted to an ExecutorService in one application be forwarded to a completely different application that also uses Redisson, rather than to another instance of the same application?
I would recommend using a prefix/suffix in Redisson's object names when you share the same Redis setup in cluster mode across multiple independent applications.
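The prefixing idea can be sketched as a tiny namespacing helper: each application gets its own namespace, so the same logical object name ("jobQueue", say) maps to distinct Redis keys and an ExecutorService in one app never sees another app's tasks. The helper and names below are made up for illustration; with Redisson itself you would simply bake the prefix into the name passed to getMap(), getQueue(), getExecutorService(), and so on.

```python
# Hypothetical namespacing helper illustrating per-application prefixes.
# Two apps using the same logical object name end up with different keys.

def namespaced(app: str, name: str, sep: str = ":") -> str:
    return f"{app}{sep}{name}"

class AppNamespace:
    """One instance per application; prefixes every object name."""
    def __init__(self, app: str):
        self.app = app

    def name(self, obj_name: str) -> str:
        return namespaced(self.app, obj_name)
```

Instances of the *same* application share a prefix on purpose: that is what lets them share queues and distributed objects while staying isolated from other applications.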

How to migrate Redis database to Aerospike?

We have a large redis database. The number of keys exploded recently as we have ~160M keys which take 50GB+ of RAM.
What would be the best migration strategy to move all this data from Redis to Aerospike? We are planning to use Jedis later so hopefully after the migration it will be as simple as pointing our services to a new port.
Ideally we can somehow import the dump.rdb file into Aerospike.
You need to put in a little bit of extra work. Aerospike now supports Redis-like list and map APIs, so the migration should not be painful. However, you need to migrate both your data and your application.
To migrate the data, you can export Redis data in CSV format using the redis-cli utility and load it into Aerospike using the Aerospike CSV loader utility. You can parallelize the loading if you split the data into multiple CSV files.
To migrate the application, it's best to use the Aerospike native client library for better integration. You can pick the language of your choice, and you should find an equivalent API for most of your needs. If you have already abstracted the basic calls in your application, the migration will be even smoother, as there will be only a few places where you need to change the calls.
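The export-and-split half of that plan can be sketched like this: write key-value pairs out as CSV with the standard csv module, chunked into several pieces so the loader can run in parallel. The input here is a plain list of pairs standing in for a SCAN over a live Redis, and the header names and chunk size are made-up parameters, not anything the Aerospike loader mandates.

```python
# Sketch: chunked CSV export of key-value pairs for a parallelizable load.
# The pairs list is a stand-in for iterating a live Redis with SCAN (assumption).
import csv
import io

def export_chunks(pairs, chunk_size=100_000):
    """Yield CSV documents (as strings) of at most chunk_size rows each,
    each with its own header row so every chunk is independently loadable."""
    pairs = list(pairs)
    for i in range(0, len(pairs), chunk_size):
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["key", "value"])            # per-chunk header
        writer.writerows(pairs[i:i + chunk_size])
        yield buf.getvalue()
```

At ~160M keys, doing the export in chunks also keeps memory bounded on the machine running the dump.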

What exactly is Gemfire?

I have been studying 'in-memory data grids' and came across the term 'GemFire'. I'm confused. It seems that GemFire refers to technology that stores and manipulates data like a database, but in the computer's memory. Is that right? What exactly is GemFire?
Which technologies can I use to work with 'in-memory data grids' in Node.js?
I saw some products, like 'Apache Geode' and 'Pivotal GemFire'. How do I work with them? Is it like working with caching technologies (such as Redis or Memcached)? In Geode's case, is the data only accessible through an API, or are there other ways to access it?
There are many products that qualify as an "in-memory data grid"; GemFire is one of the leading ones. From this article, the main ones are:
VMware GemFire (Java)
Oracle Coherence (Java)
Alachisoft NCache (.NET)
GigaSpaces XAP Elastic Caching Edition (Java)
Hazelcast (Java)
ScaleOut StateServer (.NET)
Most of these products have drivers for many languages. You can access data in GemFire over REST, or through the native Node.js client.
Apache Geode is the open-source version of GemFire. It is much more powerful than Memcached and Redis: you can use Geode not only as a cache but as a store of record (it has native persistence). It has a built-in Object Query Language (OQL) engine, which allows you to query nested objects, and it has powerful features such as continuous queries and replication over WAN, among others. Geode also has protocol adapters for Memcached and Redis, allowing your Memcached and Redis clients to connect to Geode.
I would add to the list of "in-memory data grid" solutions:
Apache Ignite
Infinispan
They also provide powerful features.
For feature comparison you can use this website: https://db-engines.com/en/system/Hazelcast%3BIgnite .
Last note: GemFire is now a Pivotal solution.
GemFire is a high-performance distributed data-management infrastructure that sits between the application cluster and back-end data sources.
With GemFire, data can be managed in memory, which makes access faster.
Kindly check the link below for further details:
https://www.baeldung.com/spring-data-gemfire