How to push JSON files from a local machine into AWS Redis?

I have connected to AWS Redis from an EC2 instance. I have a few JSON files that I have to upload to Redis. These files are part of our application, whose speed we want to improve by using a Redis cache.
Below is the command we used just to connect to Redis from EC2.
src/redis-cli -c -h Demo-redis-ro.test.ng.0001.aps3.cache.amazonaws.com -p 6379
How can we upload these files into the Redis cluster? Are there any commands we can run on the EC2 instance to push data to Redis, or any tool we can use to push the files?
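For example, one way to do it with redis-cli alone. This is a minimal sketch, assuming each file should be stored as a single string value under a key named after the file; the directory, the key prefix, and the endpoint placeholder are assumptions, and -x tells redis-cli to read the value from stdin:
# Sketch: push every JSON file as a string value keyed by its file name.
# Use the cluster's writable (primary/configuration) endpoint, not a read-only one.
REDIS_HOST="<your-redis-endpoint>.cache.amazonaws.com"   # placeholder
for f in /path/to/json/*.json; do
  key="myapp:json:$(basename "$f" .json)"
  src/redis-cli -c -h "$REDIS_HOST" -p 6379 -x SET "$key" < "$f"
done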

Related

How to access keys inside my Redis cloud database?

I've created a Redis database on Redis Cloud with AWS. So far I've added 5 hashes (key-value pairs) to the database, but I can't seem to find a way to view those hashes. Can anyone tell me how to do that?
You should be able to connect using the redis-cli command-line client with the host, port and password for your Redis instance, then use the command hgetall <keyname> to see the contents of the hash stored at <keyname>.
Alternatively, download a copy of the graphical RedisInsight tool and connect it to the host, port and password you're running Redis on.
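For example, a minimal redis-cli session; the host, port, password and key name below are placeholders:
# List key names without blocking the server (SCAN-based)
redis-cli -h <host> -p <port> -a <password> --scan
# Show all fields and values of one hash
redis-cli -h <host> -p <port> -a <password> hgetall <keyname>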

How to configure Redis in Google Cloud Platform connected to App Engine to autoscale

I want to deploy an autoscalable Redis in GCP and connect it to my App Engine app via my app.yml.
Is Cloud Launcher the proper way to launch the Redis service? I did so and selected the Redis click-to-deploy option (not the Bitnami one).
I configured the instances and deployed them.
After the instances were ready, the following command appeared:
gcloud compute ssh --project <project-name> --zone <zone-name> <redis-instance-name>
After that, do I have to configure the following things?
Instance IP address (I want it to be accessible only from inside my GCP account). Do I need to configure all 3 instances, or does the Sentinel take care of the redirection?
Password of the Sentinel Redis
Firewall (see the sketch below)
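For the firewall point, a minimal sketch of a rule that restricts Redis traffic to the internal network; the rule name, network, ports and source range are assumptions:
# Allow Redis (6379) and Sentinel (26379) only from the assumed internal range
gcloud compute firewall-rules create redis-internal-only \
  --network=default \
  --allow=tcp:6379,tcp:26379 \
  --source-ranges=10.128.0.0/9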

Starting a Redis cluster with an RDB file

I'm trying to create a Redis cluster using an RDB file taken from a single-instance Redis server. Here is what I've tried:
#! /usr/bin/env bash
for i in 6000 6001 6002 6003
do
  redis-server --port $i --cluster-config-file "node-$i.cconf" --cluster-enabled yes --dir "$(pwd)" --dbfilename dump.rdb &
done
That script starts up 4 Redis processes that are cluster enabled. It also initializes each node with the dump file.
Then I run redis-trib.rb so that the 4 nodes can find each other:
redis-trib.rb create 127.0.0.1:6000 127.0.0.1:6001 127.0.0.1:6002 127.0.0.1:6003
I get the following error:
>>> Creating cluster
[ERR] Node 127.0.0.1:6000 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.
I've also tried a variant where only the first node/process is initialized with the RDB file and the others are empty. I can join the 3 empty nodes into a cluster but not the one that's pre-populated with data.
What is the correct way to import a pre-existing RDB file into a Redis cluster?
In case this is an "X-Y problem", here is why I'm doing this:
I'm working on a migration from a single-instance Redis server to an ElastiCache Redis cluster (cluster mode enabled). ElastiCache can easily import an RDB file on cluster startup if you upload it to S3, but it takes a long time for an ElastiCache cluster to start. To shorten the feedback loop as I test my migration code, I'd also like to be able to start a cluster locally.
I've also tried using the create-cluster utility, but that doesn't appear to have any options to pre-populate the cluster with data.
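One workaround for local testing may be to create the cluster from empty nodes first, then load the RDB into a separate standalone instance and import its keys into the cluster. A minimal sketch with redis-trib.rb; the source port 7000, the paths, and starting the cluster nodes without the dump file are assumptions:
# Standalone source instance loaded from the existing dump file
redis-server --port 7000 --dir "$(pwd)" --dbfilename dump.rdb &
# Create the cluster from the four nodes, this time started WITHOUT the dump file
redis-trib.rb create 127.0.0.1:6000 127.0.0.1:6001 127.0.0.1:6002 127.0.0.1:6003
# Copy all keys from the standalone instance into the cluster
redis-trib.rb import --from 127.0.0.1:7000 127.0.0.1:6000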

Can you keep the connection open between invocations of `aws s3` CLI?

SSH has a useful opt-in feature that allows you to reuse a connection between invocations:
ssh host 'echo example' # this opens the connection to host, and leaves it open
ssh host 'echo example2' # this reuses the connection from the previous command
Is there something similar for the AWS S3 command-line interface? For example:
aws s3 mv s3://bucketname/example1 s3://bucketname/example2
aws s3 mv s3://bucketname/example3 s3://bucketname/example4
It would be great if the first command opened a connection and left it open for the second one to take advantage of. This would speed up the AWS S3 CLI tremendously when running a ton of small commands.
You cannot. You could always use boto3 (the Python SDK) directly, or in fact any of the SDKs for the supported languages; those would allow you to persist a connection, but the CLI does not support that sort of thing.

How to set up a volume linked to S3 in Docker Cloud with AWS?

I'm running my Play! web app with Docker Cloud (I could also use Rancher) and AWS, and I'd like to store all the logs in S3 (via a volume). Any ideas on how I could achieve that with minimal effort?
Use Docker volumes to store the logs on the host system.
Then use the aws-cli to sync your local log directory with an S3 bucket:
aws s3 sync /var/logs/container-logs s3://bucket/
Create a cron job to run it every minute or so, for example:
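A minimal crontab entry for that; the log directory, bucket name and log file are assumptions:
# Run the sync every minute and append output to a log file
* * * * * aws s3 sync /var/logs/container-logs s3://bucket/ >> /var/log/s3-sync.log 2>&1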
Reference: s3 aws-cli