AWS cross-account Redis replication

I want to copy Redis cluster data from AWS account A to AWS account B.
I have been searching through multiple tools but couldn't find one that solves this specific problem.
It would be great if someone could provide references for such tools.
Thanks in advance.
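In case it helps later readers: one roll-your-own approach (not a packaged tool) is to stream keys from the source with SCAN and copy them across with DUMP/RESTORE. Below is a minimal sketch using Python's redis-py; the hostnames are placeholders, it assumes both endpoints are network-reachable from where it runs (e.g. over VPC peering between the two accounts), and for a sharded cluster you would repeat it per shard.

```python
import redis

# Placeholder endpoints -- each side must be reachable from where this
# runs, e.g. over VPC peering between account A and account B.
src = redis.Redis(host="redis-a.example.internal", port=6379)
dst = redis.Redis(host="redis-b.example.internal", port=6379)

# Stream keys with SCAN, then copy each one with DUMP/RESTORE,
# preserving any remaining TTL (0 means "no expiry").
for key in src.scan_iter(count=1000):
    payload = src.dump(key)
    if payload is None:          # key expired between SCAN and DUMP
        continue
    ttl = src.pttl(key)          # -1 when the key has no expiry
    dst.restore(key, max(ttl, 0), payload, replace=True)
```

Note this is only a point-in-time copy, not continuous replication; writes that land on the source while it runs can be missed.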

Related

How to move data analytics into AWS?

I've installed Tiger and I have one problem I hope you can help me solve. Suppose I install Tiger at a physical data center, either using Docker and the AIO, or using Kubernetes. I get it installed, connect the data sources, do the ETL, and create the LDM, metrics, insights, and dashboard KPIs. However, I then realize that we need a cloud strategy and must move our data analytics (the on-premise Tiger) to AWS. Can I shut down the Docker image or Kubernetes deployment and SCP it to either 1. an AWS EC2 instance or 2. AWS EKS? Can someone walk me through these steps theoretically?
I assume that the data sources are not yet on AWS, and that there is a VPN connection between the on-premise data center and AWS, or even AWS Direct Connect between the on-premise data center and the customer's AWS region.
If you are thinking about moving Tiger but not the data sources, it would definitely be challenging because of latency (and also security).
That said, if a customer has a good, secure link between the public cloud and on-premise, then it should work.
In that case both deployments of Tiger can run fully in parallel, on top of the same data source, so the migration would be almost zero-downtime.

Is there a way or tool to push Cassandra data into AWS for backup purposes?

I'm working as a Cassandra cluster DevOps engineer and want to know whether there is a way or tool to push Cassandra data into AWS for backup purposes. My Cassandra cluster is not in AWS. I explored netflix-priam, but as I understand it, it needs Cassandra to be hosted on AWS itself, where it then takes backups to EBS. My question is: why would I need to install a Cassandra cluster on AWS if I already have a working on-premise Cassandra? I have also read about the cassandra-snapshotter and tablesnap code on GitHub, but I don't want to use those. So, again: is there such a tool other than tablesnap, cassandra-snapshotter and Netflix Priam?
Please help.
Thanks.
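For what it's worth, you can get a basic version of this without any of those tools: take a snapshot with nodetool and ship the SSTable files to S3 yourself. A minimal, hedged sketch in Python follows; the keyspace, data directory and bucket name are placeholders, and it assumes nodetool is on the PATH and AWS credentials are configured for boto3.

```python
import subprocess
from pathlib import Path

import boto3

# Placeholder values -- adjust for your cluster and account.
KEYSPACE = "my_keyspace"
SNAPSHOT_TAG = "daily-backup"
DATA_DIR = Path("/var/lib/cassandra/data")   # default Cassandra data dir
BUCKET = "my-cassandra-backups"              # assumed pre-created S3 bucket

# 1. Take an on-disk snapshot (hard links to the current SSTables).
subprocess.run(["nodetool", "snapshot", "-t", SNAPSHOT_TAG, KEYSPACE],
               check=True)

# 2. Upload every file in the snapshot directories to S3, keeping the
#    keyspace/table/snapshot path structure as the object key.
s3 = boto3.client("s3")
for f in DATA_DIR.glob(f"{KEYSPACE}/*/snapshots/{SNAPSHOT_TAG}/*"):
    if f.is_file():
        s3.upload_file(str(f), BUCKET, str(f.relative_to(DATA_DIR)))
```

You would run this on every node (snapshots are per-node) and handle retention and cleanup separately, which is essentially what tablesnap and cassandra-snapshotter automate.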

EMR 5.4.0 High Availability

Does EMR 5.4.0 support HA for the ResourceManager, NameNode and Hive? If not, is there a roadmap for it?
I am not able to find this on the EMR documentation site:
https://docs.aws.amazon.com/emr/latest/ReleaseGuide
Please suggest any useful document you may find.
As of Feb 2018, AWS EMR has a single point of failure at the master node.
See the EMR FAQ, specifically:
Q: If the master node in a cluster goes down, can Amazon EMR recover it?
If HA is a necessary requirement, you would want to consider either the cloud offerings from Cloudera/Hortonworks/MapR or a custom installation on AWS EC2 instances.
To help people coming to this post with EMR HA queries:
as of writing this in May 2019, HA is generally available on AWS EMR. More information in the AWS announcement below.
https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-emr-announces-support-for-multiple-master-nodes-to-enable-high-availability-for-EMR-applications/
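For reference, the multi-master feature is enabled simply by requesting three master nodes when launching the cluster. A minimal boto3 sketch follows; the subnet ID, instance types and log bucket are placeholders, and it assumes the default EMR service roles already exist in the account.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# InstanceCount=3 for the MASTER group turns on the multi-master HA
# feature (available from EMR 5.23.0 onwards).
response = emr.run_job_flow(
    Name="ha-cluster",
    ReleaseLabel="emr-5.23.0",
    Applications=[{"Name": "Hadoop"}, {"Name": "Hive"}],
    Instances={
        "InstanceGroups": [
            {"Name": "Masters", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 3},
            {"Name": "Core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "Ec2SubnetId": "subnet-0123456789abcdef0",  # placeholder subnet
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",   # default EC2 instance profile
    ServiceRole="EMR_DefaultRole",       # default EMR service role
    LogUri="s3://my-emr-logs/",          # placeholder bucket
)
print(response["JobFlowId"])
```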

ElastiCache with Redis - client SDKs

I have a web farm in Amazon and one of my sites needs some caching.
I am considering using ElastiCache for Redis.
Can anyone shed some light on how I would connect to and interact with this cache?
I have read about several client SDKs like StackExchange.Redis, ServiceStack, etc.
.NET is my preferred platform.
Can these client SDKs be used to interact with Redis on ElastiCache?
Does anyone know of documentation and/or code examples using ElastiCache Redis (with the StackExchange.Redis SDK)?
I'm guessing I will have to authenticate using a key/secret pair; is this supported in any of these client SDKs?
Thanks in advance!
Lars
You connect to ElastiCache the same way you connect to any other Redis instance. Once you create a new ElastiCache instance, you'll be given the hostname to connect to. There is no need for a secret/key pair: all access to the Redis instance is configured through security groups, just like with other AWS instances in EC2, RDS, etc.
With that said, there are two important caveats:
You will only be able to connect to ElastiCache from within the region and/or VPC in which it's launched, even if you open up the security group to outside IPs (for me, this is one of the biggest reasons not to use ElastiCache).
You cannot set a password on your Redis instance. Anyone on a box that is given access to the instance in the security groups (keeping in mind the limitation from caveat 1) will have full rights to add, delete, and modify whatever keys they like. This is the other big reason not to use ElastiCache, though it certainly still has use cases where these drawbacks are less important.
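To make the connection pattern concrete: any standard Redis client pointed at the endpoint works, and the .NET SDKs the asker lists follow the same shape. The sketch below uses Python's redis-py purely as an illustration; the endpoint hostname is a placeholder for the primary endpoint shown in the ElastiCache console.

```python
import redis

# Placeholder -- copy the real primary endpoint from the ElastiCache
# console once the instance is created.
r = redis.Redis(
    host="my-cache.abc123.0001.use1.cache.amazonaws.com",
    port=6379,
)

# No password argument: access is controlled by security groups,
# as described in the answer above.
r.set("greeting", "hello from ElastiCache")
print(r.get("greeting"))
```

Remember caveat 1: this will only connect when run from inside the same region/VPC.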

Monitoring Amazon S3 logs with Splunk?

We have a large extended network of users that we track using badges. The total traffic is in the neighborhood of 60 million impressions a month. We are currently considering switching from a fairly slow, database-based logging solution (custom-built on PHP, messy...) to a simple log-based alternative that relies on Amazon S3 logs and Splunk.
After using Splunk for some other analysis tasks, I really like it. But it's not clear how to set up a source like S3 with the system. It seems that remote sources require the Universal Forwarder to be installed, which is not an option there.
Any ideas on this?
Very late answer, but I was looking for the same thing and found a Splunk app that does what you want: http://apps.splunk.com/app/1137/. I have not tried it yet, though.
I would suggest logging JSON-preprocessed data to a document database, for example using Azure Queues or similar service-bus messaging technologies that fit your scenario, in combination with Azure DocumentDB.
That keeps your database-based approach but changes it into a schemaless, easy-to-scale document DB.
I use http://www.insight4storage.com/ from the AWS Marketplace to track my AWS S3 storage usage totals by prefix, bucket, or storage class over time; it also shows me the previous versions' storage by prefix and per bucket. It has a setting to save the S3 data as Splunk-format logs that might work for your use case, in addition to its UI and web-service API.
You can use the Splunk Add-on for AWS.
This is what I understand:
1. Create a Splunk instance. Use the website version, or use the on-premise AMI of Splunk to create an EC2 instance where Splunk is running.
2. Install the Splunk Add-on for AWS application on that EC2 instance.
3. Based on the type of input logs (e.g. CloudTrail logs, Config logs, generic logs, etc.), configure the add-on and supply parameters such as the AWS account ID or an IAM role.
4. The add-on will automatically poll the AWS S3 source and fetch the latest logs after the specified interval (defaults to 30 seconds).
For a generic use case (like ours), you can try configuring the Generic S3 input for Splunk; the IAM side of that is sketched below.
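On the permissions step: the account or role you hand to the add-on needs read access to the log bucket. As a hedged illustration (the bucket and role names are placeholders, not from this thread), here is a minimal boto3 sketch of attaching an inline read-only S3 policy to the role the add-on assumes:

```python
import json

import boto3

iam = boto3.client("iam")

# Placeholder names -- substitute your own log bucket and the IAM role
# configured in the Splunk Add-on for AWS.
BUCKET = "my-s3-access-logs"
ROLE_NAME = "splunk-addon-role"

# The Generic S3 input needs to list the bucket and fetch its objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
         "Resource": f"arn:aws:s3:::{BUCKET}"},
        {"Effect": "Allow",
         "Action": ["s3:GetObject"],
         "Resource": f"arn:aws:s3:::{BUCKET}/*"},
    ],
}

iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="splunk-s3-read",
    PolicyDocument=json.dumps(policy),
)
```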