In zato.io I would like to create a service that periodically pulls data from an ERP system.
Later on, that data will be pushed from zato.io to another external system.
Is it good practice to store that data (temporarily) in zato.io's default Redis database, or should a new Redis instance be deployed?
Yes, using zato.io's default Redis database for this is possible.
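For illustration, a rough sketch of what such a service could look like, assuming an outgoing HTTP connection named 'ERP' has been configured and the default KVDB (Redis) is enabled; both names are illustrative, and the service would then be wired to Zato's scheduler to run periodically:

from zato.server.service import Service

class PullERPData(Service):
    def handle(self):
        # Call the ERP system through a pre-configured outgoing connection;
        # the connection name 'ERP' is an assumption
        response = self.outgoing.plain_http['ERP'].conn.get(self.cid)

        # Stash the payload temporarily in Zato's built-in Redis (the KVDB)
        self.kvdb.conn.set('erp:latest', response.text)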
A simple question about using Redis as a persistent database (not in-memory):
Can I directly query the Redis database from my Spring Boot application (just like with MySQL or an Oracle DB), or does the data always have to be loaded into my application's memory first, with requests executed against that in-memory copy?
Thanks.
When you query data from Redis, it does not load that data into memory at that point. Redis is an in-memory database, meaning it always keeps all of its data in its own memory, and when you send a query to Redis, it processes the query against the data that is already in memory.
Redis is an in-memory database which you can treat like any other external dependency your application may have. Compared with the other databases you mentioned, it does not offer the ability to use SQL to query it, so you must rely on its own commands, which are specific to the data structures it provides.
There are some Java clients you can use to interact with Redis, including Lettuce and Jedis. The commands you send to Redis are executed against the data that Redis itself keeps in its own memory.
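In other words, your application talks to Redis over the network and only receives the results, much like with any other database server. A minimal sketch, shown here with Python's redis-py client for brevity (Jedis and Lettuce expose the same commands in Java); host, port and key names are illustrative:

import redis

# Connect to the Redis server; each command below is executed by Redis itself,
# against the data Redis holds in its own memory
r = redis.Redis(host='localhost', port=6379)

r.set('user:42:name', 'Alice')
print(r.get('user:42:name'))          # b'Alice'
print(r.hgetall('user:42:profile'))   # {} if the hash does not exist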
I have 3 Redis instances. One is the master and the other two are slaves. I have connected to the master node and fetched info via redis-cli with the INFO command. I can see the parameter cluster_enabled:0 and
# Replication
role:master
connected_slaves:2
slave0:ip=xxxxx,port=6379,state=online,offset=15924636776,lag=1
slave1:ip=xxxxx,port=6379,state=online,offset=15924636776,lag=0
As for the keyspace, each node has different DBs. I need to migrate all the data to a single Memorystore instance in GCP, but I don't know how. Can anyone help me?
Since the other two nodes are slaves and clustering is not enabled, you only need to replicate the master node. RIOT is a great tool for migrating data in and out of Redis.
However, if by DBs per node you mean the Redis databases you access with SELECT, then you will need to prefix the keys, as there may be overlap between the keysets of the different DBs; a sketch of this follows.
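A minimal sketch of that prefixing approach, using Python's redis-py with DUMP/RESTORE so values of any type are copied verbatim; the hosts and the range of source DBs are assumptions to adapt:

import redis

dst = redis.Redis(host='memorystore-host', port=6379, db=0)

for db in range(2):  # assumption: source data lives in DBs 0 and 1
    src = redis.Redis(host='source-host', port=6379, db=db)
    for key in src.scan_iter(count=1000):
        payload = src.dump(key)
        ttl = src.pttl(key)  # -1 means no expiry
        # Prefix each key with its source DB to avoid collisions
        dst.restore(b'db%d:' % db + key, max(ttl, 0), payload, replace=True)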
I think setting up another Redis cluster in a single node configuration is the least of your worries.
The real challenge for you will be migrating all your records over to the new setup. This is not a simple question to answer, as it depends heavily on multiple factors:
The total size of the data being migrated
Is this a live database in production?
Do you want to keep the two DB schemas separate in your new configuration?
OK, I believe your Redis instances are currently hosted on Google Compute Engine,
and you are looking to migrate to Memorystore for Redis.
As mentioned here, you can leverage Redis snapshots for this. The documentation provides step-by-step instructions on how to achieve it, using GCS buckets as transient storage; in its words, you can
import data into Cloud Memorystore instances using RDB (Redis Database Backup) snapshots, as well as back up data from existing Redis instances.
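The import step itself then boils down to something like the line below, once the RDB snapshot has been uploaded to a GCS bucket; the bucket, instance name and region are illustrative, and the exact flags are worth double-checking against gcloud redis instances import --help:

gcloud redis instances import gs://my-bucket/dump.rdb my-instance --region=us-central1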
How do you export and import data in Prometheus? How do you make sure the data is backed up if the instance goes down?
There does not seem to be such a feature yet, so how do you do it?
Since Prometheus version 2.1 it is possible to ask the server for a snapshot. The documentation provides more details - https://web.archive.org/web/20200101000000/https://prometheus.io/docs/prometheus/2.1/querying/api/#snapshot
Once a snapshot is created, it can be copied somewhere for safe keeping and if required a new server can be created using this snapshot as its database.
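For instance, a minimal sketch of requesting a snapshot over HTTP with Python's requests library, assuming a recent Prometheus started with the --web.enable-admin-api flag (without it the endpoint is disabled):

import requests

# Ask Prometheus to snapshot its TSDB; the response contains the snapshot's directory name
resp = requests.post('http://localhost:9090/api/v1/admin/tsdb/snapshot')
print(resp.json())  # e.g. {'status': 'success', 'data': {'name': '20240101T000000Z-...'}}

The snapshot is written under Prometheus's data directory, in snapshots/<name>.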
The documentation website changes its URLs from time to time; this is a link to fairly recent documentation on the TSDB admin APIs:
https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-admin-apis
There is no export and especially no import feature for Prometheus.
If you need to keep data collected by Prometheus for some reason, consider using the remote write interface to write it somewhere suitable for archival, such as InfluxDB (which can act as a long-term time-series store), for example:
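A minimal sketch of what that remote_write stanza looks like in prometheus.yml, assuming an InfluxDB 1.x server with its Prometheus remote write endpoint enabled; the host and database name are illustrative:

remote_write:
  # Forward every scraped sample to InfluxDB for long-term archival
  - url: "http://influxdb:8086/api/v1/prom/write?db=prometheus"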
Prometheus isn't a long term storage: if the database is lost, the user is expected to shrug, mumble "oh well", and restart Prometheus.
Credits and many thanks to amorken from IRC #prometheus.
There is an option to replicate Prometheus data to a remote storage backend. The data collected from multiple Prometheus instances can later be backed up in one place on the remote storage backend. See, for example, how the VictoriaMetrics remote storage backend can save time and network bandwidth when creating backups to S3 or GCS with the vmbackup utility.
We have a large Redis database. The number of keys recently exploded: we have ~160M keys, which take 50GB+ of RAM.
What would be the best migration strategy to move all this data from Redis to Aerospike? We are planning to use Jedis later so hopefully after the migration it will be as simple as pointing our services to a new port.
Ideally we can somehow import the dump.rdb file into Aerospike.
You need to put in a little bit of extra work. Aerospike now supports Redis-like list and map APIs, so the migration will not be painful. However, you need to migrate both your data and your application.
To migrate the data, you can export the Redis data in CSV format using the redis-cli utility and load it into Aerospike using the Aerospike CSV loader utility. You can parallelize the loading if you split the data into multiple CSV files.
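The export can also be scripted with a Redis client instead of redis-cli. A minimal sketch in Python's redis-py, which assumes all keys hold plain strings; the file name and header row are illustrative and should be adapted to your loader configuration:

import csv
import redis

r = redis.Redis(host='localhost', port=6379)

with open('export.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['key', 'value'])  # header row; adapt to your loader config
    # SCAN iterates the keyspace without blocking the server the way KEYS would
    for key in r.scan_iter(count=1000):
        if r.type(key) != b'string':
            continue  # hashes, lists, sets and zsets need their own handling
        value = r.get(key)
        if value is not None:  # key may have expired between SCAN and GET
            writer.writerow([key.decode(), value.decode()])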
To migrate the application, it's best to use the Aerospike native client library for better integration. You can pick the language of your choice, and you should find an equivalent API for most of your needs. If you have already abstracted the basic calls in your application, the migration will be even smoother, as there will be only a few places where you need to change the calls.
We have a big shopping and product dealing system. We have faced lots of problems with MySQL, so after a bit of R&D we decided to use Redis, and we have started integrating it into our system.
The following, which previously hit the database directly, have now been moved to Redis:
User shopping cart details
Affiliate click-tracking records
Product dealing user data
Other site stats
I am not only storing the data in Redis; I have written crons which move the Redis data into MySQL at intervals. This is the main point where I am facing issues.
I am looking for solutions to the points below:
Is there any other way to dump big data from Redis to MySQL?
When Redis fails, our data is stored in a file, so is it possible to store that data directly in the MySQL database?
Does Redis have any trigger system I could use to avoid the crons, like a queue system?
Is there any other way to dump big data from Redis to MySQL?
Redis has the capability (using BGSAVE) to generate a dump of the data in a non-blocking and consistent way.
https://github.com/sripathikrishnan/redis-rdb-tools
You could use Sripathi Krishnan's well-known redis-rdb-tools package (linked above) to parse a Redis dump file (RDB) in Python and populate the MySQL instance offline. Or you can convert the Redis dump to JSON format and write scripts in any language you want to populate MySQL.
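For example, per the project's README, converting a dump to JSON is a one-liner (the dump path is illustrative):

rdb --command json /var/redis/6379/dump.rdb > dump.json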
This solution is only interesting if you want to copy the complete data set of the Redis instance into MySQL.
Does Redis have any trigger system that I can use to avoid the crons, like a queue system?
Redis has no trigger concept, but nothing prevents you from posting events to Redis queues each time something must be copied to MySQL. For instance, instead of:
# Add an item to a user shopping cart
RPUSH user:<id>:cart <item>
you could execute:
# Add an item to a user shopping cart
MULTI
RPUSH user:<id>:cart <item>
RPUSH cart_to_mysql <id>:<item>
EXEC
The MULTI/EXEC block makes it atomic and consistent. Then you just have to write a little daemon waiting on items of the cart_to_mysql queue (using BLPOP commands). For each dequeued item, the daemon has to fetch the relevant data from Redis and populate the MySQL instance, as sketched below.
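A minimal sketch of such a daemon in Python, using redis-py and PyMySQL; the connection settings and the cart_events table are hypothetical and only illustrate the shape of the loop:

import pymysql
import redis

r = redis.Redis(host='localhost', port=6379)
db = pymysql.connect(host='localhost', user='app', password='secret',
                     database='shop')  # hypothetical credentials

while True:
    # BLPOP blocks until an item arrives on the queue (timeout=0 waits forever)
    _, raw = r.blpop('cart_to_mysql', timeout=0)
    user_id, item = raw.decode().split(':', 1)
    with db.cursor() as cur:
        # cart_events(user_id, item) is a hypothetical table
        cur.execute('INSERT INTO cart_events (user_id, item) VALUES (%s, %s)',
                    (user_id, item))
    db.commit()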
When Redis fails, our data is stored in a file, so is it possible to store that data directly in the MySQL database?
I'm not sure I understand the question here. But if you use the above solution, the latency between Redis updates and MySQL updates will be quite limited, so if Redis fails, you will only lose the very last operations (contrary to a solution based on cron jobs). It is of course not possible to have 100% consistency in the propagation of the data, though.