How to move data analytics into AWS? - GoodData

I've installed Tiger and I have one problem I hope you can help me solve. Suppose I install Tiger in a physical data center, either using Docker with the AIO image or using Kubernetes. I get it installed, connect the data sources, do the ETL, and create the LDM, metrics, insights, and KPI dashboards. However, I then realize that we need a cloud strategy and have to move our data analytics (the on-premise Tiger) to AWS. Can I shut down the Docker image or Kubernetes deployment and SCP it to either (1) an AWS EC2 instance or (2) AWS EKS? Can someone walk me through these steps theoretically?
Assume the data sources are not yet on AWS, and that there is a VPN connection between the on-premise data center and AWS, or even AWS Direct Connect between the on-premise data center and the customer's AWS Region.

If you are thinking about moving Tiger but not the data sources, it would definitely be challenging because of latency (and also security).
Well, if the customer has a good, secure link between the public cloud and on-premise, then it should work.
In such a case, both Tiger deployments can run fully in parallel on top of the same data sources, so the migration would be almost zero-downtime.
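If you run both deployments in parallel, the remaining question is how to carry the metadata (LDM, metrics, insights, dashboards) across. A minimal sketch of one way to do that, assuming GoodData.CN's declarative layout API (GET/PUT on /api/v1/layout/workspaces/{id}); the hostnames, tokens, and workspace ID below are placeholders, not values from your setup:

```python
# Sketch: copy a workspace's declarative definition (LDM, metrics,
# insights, dashboards) from the on-premise Tiger to the AWS one.
# Assumes the declarative layout API; hostnames/tokens are placeholders.
import requests

ON_PREM = "https://tiger.onprem.example.com"
AWS = "https://tiger.aws.example.com"
WORKSPACE_ID = "production"  # hypothetical workspace ID

src_headers = {"Authorization": "Bearer <on-prem-api-token>"}
dst_headers = {"Authorization": "Bearer <aws-api-token>"}

# Export the full declarative definition from the on-premise deployment.
resp = requests.get(
    f"{ON_PREM}/api/v1/layout/workspaces/{WORKSPACE_ID}", headers=src_headers
)
resp.raise_for_status()
layout = resp.json()

# Load the same definition into the AWS deployment. Data source
# connections and credentials are registered separately on the target.
resp = requests.put(
    f"{AWS}/api/v1/layout/workspaces/{WORKSPACE_ID}",
    headers=dst_headers,
    json=layout,
)
resp.raise_for_status()
print("Workspace layout migrated.")
```

Once the AWS deployment serves traffic correctly against the shared data sources, the on-premise one can simply be decommissioned.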

Related

Is there a way or tool to push Cassandra data into AWS for backup purposes?

I'm working as a Cassandra cluster DevOps engineer and wanted to know whether there is a way or tool to push Cassandra data into AWS for backup purposes. I have a Cassandra cluster that is not in AWS. I explored netflix-priam, but as per my understanding it needs Cassandra to be hosted on AWS itself, where it then takes backups on EBS. My question is: why would I need to install a Cassandra cluster on AWS if I already have a working on-premise Cassandra? I have also read about cassandra-snapshotter and the tablesnap code on GitHub, but I don't want to use those. So, again: is there such a tool other than tablesnap, cassandra-snapshotter, and Netflix Priam?
Please help
Thanks
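One tool-agnostic approach, independent of the tools mentioned above, is to take snapshots with nodetool and push them to S3 yourself. A minimal sketch, run on each node (snapshots are per-node); the keyspace, paths, and bucket name are hypothetical:

```python
# Sketch: snapshot a keyspace with nodetool and upload it to S3.
# Keyspace, data directory, and bucket name are placeholders.
import os
import subprocess
import boto3

DATA_DIR = "/var/lib/cassandra/data"
KEYSPACE = "my_keyspace"
SNAPSHOT_TAG = "s3-backup"
BUCKET = "my-cassandra-backups"
NODE_NAME = os.uname().nodename

# 1. Take a consistent on-disk snapshot (hard links, cheap and fast).
subprocess.run(
    ["nodetool", "snapshot", "-t", SNAPSHOT_TAG, KEYSPACE], check=True
)

# 2. Walk the snapshot directories and upload each SSTable file to S3,
#    keyed by node name so restores know where each file came from.
s3 = boto3.client("s3")
for root, _dirs, files in os.walk(DATA_DIR):
    if f"snapshots/{SNAPSHOT_TAG}" not in root:
        continue
    for name in files:
        path = os.path.join(root, name)
        key = f"{NODE_NAME}/{os.path.relpath(path, DATA_DIR)}"
        s3.upload_file(path, BUCKET, key)

# 3. Free the local disk space once the upload is done.
subprocess.run(
    ["nodetool", "clearsnapshot", "-t", SNAPSHOT_TAG, KEYSPACE], check=True
)
```

Scheduled via cron, this gives you off-site backups without moving the cluster itself into AWS.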

Using Kubernetes Persistent Volume for Data Protection

To resolve a few issues we are running into with Docker and running multiple instances of some services, we need to be able to share values between running instances of the same Docker image. The original solution I found was to create a storage account in Azure (where we are running our Kubernetes instance that houses the containers) and a Key Vault in Azure, accessing both via the well-defined APIs that Microsoft has provided for Data Protection (detailed here).
Our architect instead wants to use Kubernetes Persistent Volumes, but he has not provided information on how to accomplish this (he just wants to save money on the Azure subscription by not having an additional storage account or key store). I'm very new to Kubernetes and have no real idea how to accomplish this, and my searches so far have not turned up much that is useful.
Is there an extension method that should be used for Persistent Volumes? Would this just act like a shared file location and be accessible with the PersistKeysToFileSystem API for Data Protection? Any resources that you could point me to would be greatly appreciated.
A PersistentVolume with Kubernetes in Azure will not give you the same exact functionality as Key Vault in Azure.
PersistentVolume:
Stores keys locally on a volume mounted on a server.
The volume can be encrypted.
The volume moves with the pod: if the pod starts on a different server, the volume moves with it.
Accessing the volume from other pods is not that easy.
You can control performance by assigning guaranteed IOPS to the volume (from the cloud provider).
Key Vault:
Stores keys in a centralized location managed by Azure.
Data is encrypted at rest and in transit.
You rely on a remote API rather than a local file system.
There might be a performance hit from going to an external service, though I assume this is not a major problem within Azure.
Kubernetes pods can access the service from anywhere, as long as they have network connectivity to it.
Less maintenance, since the service is already maintained by Azure.
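If you do go the PersistentVolume route, the key requirement is a volume with ReadWriteMany access, so that every replica sees the same key ring, with PersistKeysToFileSystem pointed at the mount path. A minimal sketch of what this could look like on AKS; the names, storage class, and mount path are assumptions, not a tested setup:

```yaml
# Claim backed by Azure Files so that multiple pods can mount it read-write.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dataprotection-keys
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 1Gi
---
# All replicas mount the same claim at the same path.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: myregistry.azurecr.io/my-service:latest  # hypothetical image
          volumeMounts:
            - name: keys
              mountPath: /var/dataprotection-keys
      volumes:
        - name: keys
          persistentVolumeClaim:
            claimName: dataprotection-keys
```

The application would then call PersistKeysToFileSystem with /var/dataprotection-keys as the directory, so all instances read and write the same key ring. Note that, unlike Key Vault, the keys on the volume are not encrypted at rest unless you also configure a ProtectKeysWith* option or encrypt the underlying storage.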

Prometheus target management

We recently started using Prometheus in our production environment. Previously we had only 30-40 nodes per service, and those servers did not change very often, so we simply listed them in prometheus.yml. But now the list has become too long to keep in one file, and it changes much more frequently than before. So my question is: should I use file_sd_config to move those server lists out of the main YAML file and manage the target files separately, or use Consul for service discovery (which seems much easier for handling changes)?
I have installed a 3-node Consul cluster in the data center, and as far as I can see, if I switch to Consul to solve this problem, I also need to install the Consul client on each server (node) and define its service info. Is that correct? Or does anyone have better advice?
Thanks
I totally advocate the use of a service discovery system. It may be a bit hard to deploy at first, but it will surely be worth it in the future.
That said, Prometheus comes with a lot of service discovery integrations, so it's possible that you don't need a Consul cluster at all. If your servers are in a cloud provider like AWS, GCP, Azure, OpenStack, etc., Prometheus is able to autodiscover the instances.
If you keep running with Consul, the answer is yes: the agent must be running on every node. You can also register services and nodes via the API, but it's easier to deploy the agent. Both options are sketched below.
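For the file_sd_config route, a minimal sketch of what the scrape configuration could look like; the paths, ports, Consul address, and labels are illustrative, not taken from your setup. Prometheus watches the target files, so edits are picked up without a restart:

```yaml
# prometheus.yml (fragment)
scrape_configs:
  - job_name: 'node'
    file_sd_configs:
      - files:
          - '/etc/prometheus/targets/*.json'  # maintained or generated out of band
        refresh_interval: 5m                  # fallback re-read; file changes are also watched

  # Alternative: discover targets from Consul instead of files.
  - job_name: 'consul-services'
    consul_sd_configs:
      - server: 'consul.example.internal:8500'
    relabel_configs:
      - source_labels: [__meta_consul_service]
        target_label: job
```

A target file for the file-based route might look like this:

```json
[
  {
    "targets": ["10.0.0.11:9100", "10.0.0.12:9100"],
    "labels": { "env": "production", "service": "billing" }
  }
]
```

And for the Consul route, each node's agent would carry a service definition along these lines (name and port are examples), e.g. in /etc/consul.d/node_exporter.json:

```json
{
  "service": {
    "name": "node-exporter",
    "port": 9100,
    "tags": ["prometheus"]
  }
}
```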

minio: What is the cluster architecture of the minio.io object storage server?

I have searched minio.io for hours, but it doesn't provide any good information about clustering. Does it have rings by which instances are connected, or is MinIO just for a single isolated machine? And to run a cluster, do we have to run many isolated instances of it, with our app choosing which instance to write to?
If yes: when I write a file to a bucket, does MinIO replicate it between multiple servers? Is it like Amazon S3 or OpenStack Swift, which support storing multiple copies of an object on different servers (and not just on multiple disks in the same machine)?
Here is the documentation for distributed MinIO: https://docs.minio.io/docs/distributed-minio-quickstart-guide
From what I can tell, MinIO does not support clustering with automatic replication across multiple servers, load balancing, etc.
However, the MinIO documentation does describe how you can set up one MinIO server to mirror another one:
https://gitlab.gioxa.com/opensource/minio/blob/1983925dcfc88d4140b40fc807414fe14d5391bd/docs/setup-replication-between-two-sites-running-minio.md
MinIO also introduced continuous availability and active-active bucket replication. Check out their active-active replication guide.
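For completeness, a command-line sketch of the two setups mentioned above; the hostnames, paths, bucket names, and credentials are placeholders, and the linked guides remain the authoritative reference:

```sh
# Distributed MinIO: run the same command on each of the four hosts.
# Objects are erasure-coded across all the drives in the set.
minio server http://host{1...4}/mnt/export

# Site-to-site mirroring with the mc client: continuously copy one
# deployment's bucket to another.
mc alias set siteA http://hostA:9000 ACCESS_KEY SECRET_KEY
mc alias set siteB http://hostB:9000 ACCESS_KEY SECRET_KEY
mc mirror --watch siteA/mybucket siteB/mybucket
```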

Amazon web services: Where to start

I am a recent grad and want to learn about building web applications using AWS. I have gone through the documentation and ran their sample Travel Log application successfully.
But I am still not clear about the terminology used. Can anyone explain the difference between Amazon Simple Storage Service (Amazon S3), Amazon Elastic Compute Cloud (Amazon EC2), and Amazon SimpleDB in simple words?
I am looking to build a web app with a sign-in page where people can post some text. May I know which Amazon services would be required for me to build this app?
Thanks
Amazon Simple Storage Service (S3) is for serving static content: images, videos, or anything else you want to save. You can think of it as a hard drive for storage.
Amazon Elastic Compute Cloud (EC2) is basically your virtual operating system; you can install whatever OS you want (Debian, Ubuntu, Fedora, CentOS, Windows Server, SUSE Enterprise). If your application uses server-side processing, this will be its home.
Amazon SimpleDB is a NoSQL database system that you can use for your applications and that Amazon provides as a service. If you want something more, you can install your own database on EC2, or use RDS for a database server (MySQL, for example).
If you want to know more, there are some books, such as "Programming Amazon EC2", the Amazon screencasts at http://www.youtube.com/user/AmazonWebServices, and the presentations at http://www.slideshare.net/AmazonWebServices
Amazon Simple Storage Service (Amazon S3)
Amazon S3 (Simple Storage Service) is a scalable, high-speed, low-cost web-based service designed for online backup and archiving of data and application programs. It allows you to upload, store, and download any type of file up to 5 TB in size. The service gives subscribers access to the same systems that Amazon uses to run its own websites. The subscriber has control over the accessibility of the data, i.e. whether it is privately or publicly accessible.
Amazon Elastic Compute Cloud (Amazon EC2)
Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.
Amazon SimpleDB
Amazon SimpleDB is a highly available NoSQL data store that offloads the work of database administration. Developers simply store and query data items via web services requests and Amazon SimpleDB does the rest.
Unbound by the strict requirements of a relational database, Amazon SimpleDB is optimized to provide high availability and flexibility, with little or no administrative burden. Behind the scenes, Amazon SimpleDB creates and manages multiple geographically distributed replicas of your data automatically to enable high availability and data durability. The service charges you only for the resources actually consumed in storing your data and serving your requests. You can change your data model on the fly, and data is automatically indexed for you. With Amazon SimpleDB, you can focus on application development without worrying about infrastructure provisioning, high availability, software maintenance, schema and index management, or performance tuning.
For more information, go through these:
https://aws.amazon.com/simpledb/
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html
https://www.tutorialspoint.com/amazon_web_services/amazon_web_services_s3.htm
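To make the EC2 description above concrete, a minimal boto3 sketch for launching a virtual server; the AMI ID, key pair name, and instance type are hypothetical placeholders, and credentials are expected to come from the environment:

```python
# Launch a single small EC2 instance with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t2.micro",
    KeyName="my-keypair",             # hypothetical key pair name
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```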
Amazon S3 is used for the storage of files. It is basically like the hard drives on your system, where you use C: or D: for your files. If you are developing an application, you can use S3 for storing static files or any backup files.
Amazon EC2 is exactly like your physical machine, the only difference being that EC2 is in the cloud. You can install and run software and applications, and store files, exactly as you do on your physical machines.
Amazon SimpleDB is a database in the cloud. You can integrate it with your application and run queries.
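To illustrate the S3 usage described above, a minimal boto3 sketch; the bucket name and file paths are hypothetical, and credentials are expected to come from the environment or ~/.aws/credentials:

```python
# Upload a static file to S3 and download it back with boto3.
import boto3

s3 = boto3.client("s3")

# Upload a local file as a publicly readable static asset.
s3.upload_file(
    Filename="logo.png",
    Bucket="my-example-bucket",
    Key="static/logo.png",
    ExtraArgs={"ACL": "public-read", "ContentType": "image/png"},
)

# Download it back to a local copy.
s3.download_file("my-example-bucket", "static/logo.png", "logo_copy.png")
```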