Is it possible to look up an eks cluster without knowing the region? - amazon-eks

Ideally, I'd like to be able to do this with the AWS CLI, but I'm open to alternatives. Assuming I'm authenticated to a particular AWS account, is there any way to look up basic information about a cluster, or all clusters in the account, without knowing what region the cluster is in? I'd love a way to get information about a cluster without already knowing meta information about it. I could write a script that cycles through all regions looking for clusters, but I hope there's a better way.

Here is a bash for loop that should do the trick:
for region in $(aws ec2 describe-regions --output text | cut -f4)
do
  echo -e "\nListing Clusters in region: $region..."
  aws eks list-clusters --region "$region" --output text --no-cli-pager
done
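If you also want basic details for each cluster the loop turns up, it can be extended to call describe-cluster per name; a sketch, assuming only standard describe-cluster output fields (name, version, endpoint):

# For every region, list clusters and print name, version and endpoint for each one found.
for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text)
do
  for cluster in $(aws eks list-clusters --region "$region" --query 'clusters[]' --output text)
  do
    echo "Found cluster $cluster in $region:"
    aws eks describe-cluster --name "$cluster" --region "$region" \
      --query 'cluster.{name:name,version:version,endpoint:endpoint}' --output table
  done
done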

A handy command is eksctl get cluster --all-regions -o json.

Related

duplicate kubernetes namespace with the content

How can I duplicate a namespace with all content with a new name in the same kubernetes cluster?
e.g. Duplicate default to my-namespace which will have the same content.
I'm interested just in services and deployments, so
when I try this method with kubectl get all and with api-resources I get an error with the service IP, like:
Error from server (Invalid): Service "my-service" is invalid: spec.clusterIP: Invalid value: "10.108.14.29": provided IP is already allocated
As @coderanger mentioned in his answer, there is no straightforward way to copy the original k8s resources into a separate namespace.
As proposed there, when you invoke the kubectl get all command, k8s only looks at the resource types bound to the all category. Therefore, if a custom CRD in a specific API group has not been added to that category, the relevant objects will be missing from the command output.
Furthermore, if you want to export all k8s resources from a particular namespace, not just user workloads, I would recommend enumerating the API resources, filtering for namespace-scoped objects only, and then using some bash processing to generate a manifest file per resource group:
kubectl api-resources --namespaced=true | awk '{print $1}' | sed '1d' | while read -r resource; do kubectl get "$resource" -n namespace -o yaml > "$resource.yaml"; done
You can also consider using Helm (as @David Maze pointed out in the comment) in order to manage user workloads through Helm charts, as a more flexible and structured way to describe k8s native resources.
There is no specific way to do this. You could probably get close with something like kubectl get all -n sourcens -o yaml | sed -e 's/namespace: sourcens/namespace: destns/' | kubectl apply -f - but get all is always a bit wonky and this could easily miss weird edge cases.
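If you hit the clusterIP error from the question with that approach, the server-allocated IPs need to be dropped before re-applying so that new ones get assigned. A rough sketch using jq, with sourcens/destns as placeholder namespace names (services handled separately from the rest):

# Deployments and most other objects survive a plain namespace rewrite.
kubectl get deploy -n sourcens -o yaml \
  | sed -e 's/namespace: sourcens/namespace: destns/' \
  | kubectl apply -f -

# Services need their allocated cluster IPs (and server-managed metadata) removed,
# otherwise the API server rejects them as already allocated.
kubectl get svc -n sourcens -o json \
  | jq 'del(.items[].spec.clusterIP, .items[].spec.clusterIPs,
            .items[].metadata.resourceVersion, .items[].metadata.uid,
            .items[].metadata.creationTimestamp)' \
  | jq '.items[].metadata.namespace = "destns"' \
  | kubectl apply -f -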
You can back up your namespace using Velero and then restore it to another namespace or cluster!
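For the Velero route, the restore step can remap the namespace directly; a minimal sketch, assuming Velero is already installed in the cluster and the backup name is arbitrary:

# Back up only the source namespace.
velero backup create default-backup --include-namespaces default

# Restore it into a differently named namespace in the same (or another) cluster.
velero restore create --from-backup default-backup \
  --namespace-mappings default:my-namespace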

Automate Cross region copying tables in aws redshift

I have tables in a cluster in region-1 and I want to copy some of those tables to another cluster in some other region (region-2).
So far I have used Matillion, following these steps:
Copy data to s3 from cluster-a.
Load this data from s3 to cluster-b.
Since Matillion is a bit costly for this work, I want an alternative solution.
I have heard about the CLI, Lambda and the API, but I have no idea how I should use them. I go through this procedure on a weekly basis and want to automate the process.
The AWS Command-Line Interface (CLI) is not relevant for this use-case, because it is used to control AWS services (e.g. launch an Amazon Redshift database, change security settings). The commands to import/export data to/from Amazon Redshift must be issued to Redshift directly via SQL.
To copy some tables to an Amazon Redshift instance in another region:
Use an UNLOAD command in Cluster A to export data from Redshift to an Amazon S3 bucket
Use a COPY command in Cluster B to load data from S3 into Redshift, using the REGION parameter to specify the source region
You will therefore need separate SQL connections to each cluster. Any program that can connect to Redshift via JDBC would suffice. For example, you could use the standard psql tool (preferably version 8.0.2) since Redshift is based on PostgreSQL 8.0.2.
See: Connect to Your Cluster by Using the psql Tool
So, your script would be something like:
psql -h clusterA -U username -d mydatabase -c 'UNLOAD...'
psql -h clusterB -U username -d mydatabase -c 'COPY...'
You could run this from AWS Lambda, but Lambda functions only run for a maximum of five minutes, and your script might exceed that limit. Instead, you could run a regular cron job on some machine.
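To flesh that skeleton out, here is a sketch of what the weekly job might look like; the bucket, table, and IAM role names are hypothetical, and the UNLOAD/COPY options shown are just one reasonable combination:

#!/bin/bash
# Assumes credentials for both clusters live in ~/.pgpass so psql does not prompt.

# Export the table from cluster A (region-1) to S3, compressed.
psql -h clusterA -U username -d mydatabase -c "
  UNLOAD ('SELECT * FROM my_table')
  TO 's3://my-transfer-bucket/my_table/part_'
  IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
  GZIP ALLOWOVERWRITE;"

# Import into cluster B (region-2); REGION tells COPY which region the source bucket lives in.
psql -h clusterB -U username -d mydatabase -c "
  COPY my_table
  FROM 's3://my-transfer-bucket/my_table/part_'
  IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
  REGION 'us-east-1'
  GZIP;"

A crontab entry like 0 3 * * 1 /home/ec2-user/copy_tables.sh would then run it every Monday at 03:00.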

How do I do an S3 copy between regions using aws cli?

It was far too difficult to figure this out. It wasn't obvious to me, and lots of explanations left out key details. I will answer this with the solution. Sorry if it seems obvious to you, but given how many searches and experiments it took me to do this, I think it is quite worthwhile to show others how to do it.
UPDATE: According to a commenter, the extra parameters I show here may no longer be needed. I am not currently working with AWS, so I don't have a way to verify it. Anyway, I didn't change the rest of this post in case it is still needed in some case.
The trick is being explicit about both the source and destination regions. They might not always be required, but it doesn't hurt to always show them:
$ aws s3 cp s3://my-source-bucket-in-us-west-2/ \
s3://my-target-bucket-in-us-east-1/ \
--recursive --source-region us-west-2 --region us-east-1
Or on Windows
> aws s3 cp s3://my-source-bucket-in-us-west-2/ ^
s3://my-target-bucket-in-us-east-1/ ^
--recursive --source-region us-west-2 --region us-east-1
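If you repeat this copy regularly, the same region flags also work with aws s3 sync, which only transfers objects that are new or changed; a sketch using the same hypothetical bucket names as above:

$ aws s3 sync s3://my-source-bucket-in-us-west-2/ \
      s3://my-target-bucket-in-us-east-1/ \
      --source-region us-west-2 --region us-east-1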

Amazon EMR Spark Cluster: output/result not visible

I am running a Spark cluster on Amazon EMR. I am running the PageRank example programs on the cluster.
While running the programs on my local machine, I am able to see the output properly. But the same doesn't work on EMR. The S3 folder only shows empty files.
The commands I am using:
For starting the cluster:
aws emr create-cluster --name SparkCluster --ami-version 3.2 --instance-type m3.xlarge --instance-count 2 \
--ec2-attributes KeyName=sparkproj --applications Name=Hive \
--bootstrap-actions Path=s3://support.elasticmapreduce/spark/install-spark \
--log-uri s3://sampleapp-amahajan/output/ \
--steps Name=SparkHistoryServer,Jar=s3://elasticmapreduce/libs/script-runner/script-runner.jar,Args=s3://support.elasticmapreduce/spark/start-history-server
For adding the job:
aws emr add-steps --cluster-id j-9AWEFYP835GI --steps \
Name=PageRank,Jar=s3://elasticmapreduce/libs/script-runner/script-runner.jar,Args=[/home/hadoop/spark/bin/spark-submit,--deploy-mode,cluster,--master,yarn-cluster,--class,SparkPageRank,s3://sampleapp-amahajan/pagerank_2.10-1.0.jar,s3://sampleapp-amahajan/web-Google.txt,2],ActionOnFailure=CONTINUE
After a few unsuccessful attempts... I write the job's output to a text file, and it is created successfully when I run on my local machine. But I am unable to find the same file when I SSH into the cluster. I also tried FoxyProxy to view the logs for the instances, but nothing shows up there either.
Could you please let me know where I am going wrong?
Thanks!
How are you writing the text file locally? Generally, EMR jobs save their output to S3, so you could use something like outputRDD.saveAsTextFile("s3n://<MY_BUCKET>"). You could also save the output to HDFS, but storing the results in S3 is useful for "ephemeral" clusters, where you provision an EMR cluster, submit a job, and terminate it upon completion.
"While running the programs on my local machine, I am able to see the
output properly. But the same doesn't work on EMR. The S3 folder only
shows empty files"
For the benefit of newbies:
If you are printing output to the console, it will be displayed in local mode, but when you execute on an EMR cluster the reduce operations are performed on the worker nodes, and they can't write to the console of the master/driver node!
With a proper S3 path you should be able to write the results to S3.

Move Amazon EC2 AMIs between regions via web-interface?

Is there any easy way to move a custom AMI image between regions? (Tokyo -> Singapore)
I know you can mess around with the API and S3 to get it done, but is there any easier way to do it?
As of December, 2012, Amazon now supports migrating an AMI to another region through the UI tool (Amazon Management Console). See their documentation here
So, how I've done it is..
1. From the AMI, find the Snapshot ID and how it is attached (e.g. /dev/sda1)
2. Select the Snapshot, click "Copy", set the destination region and make the copy (takes a while!)
3. Select the new Snapshot, click "Create Image", and fill in:
Architecture: (choose 32 or 64 bit)
Name/Description: (give it one)
Kernel ID: when migrating a Linux AMI, choosing "default" may fail. What worked for me was to go to the Amazon Kernels listing here to find a kernel Amazon supports, then specify it when creating the image.
Root Device Name: /dev/sda1
Click "Yes, Create"
4. Launch an instance from the new AMI and test that you can connect.
You can do it using Eric's post:
http://alestic.com/2010/10/ec2-ami-copy
The following assumes your AWS command-line utilities are installed in /opt/aws/bin/, JAVA_HOME=/usr, and you are running the i386 architecture; otherwise replace i386 with x86_64.
1) Run a live snapshot, assuming you believe your image can fit in 1.5 GB and you have that much to spare in /mnt (check by running df)
/opt/aws/bin/ec2-bundle-vol -d /mnt -k /home/ec2-user/.ec2/pk-XXX.pem -c /home/ec2-user/.ec2/cert-XXX.pem -u 123456789 -r i386 -s 1500
2) Upload to current region's S3 bucket
/opt/aws/bin/ec2-upload-bundle -b S3_BUCKET -m /mnt/image.manifest.xml -a abcxyz -s SUPERSECRET
3) Transfer the image to EU S3 bucket
/opt/aws/bin/ec2-migrate-image -K /home/ec2-user/.ec2/pk-XXX.pem -C /home/ec2-user/.ec2/cert-XXX.pem -o abcxyz -w SUPERSECRET --bucket S3_BUCKET_US --destination-bucket S3_BUCKET_EU --manifest image.manifest.xml --location EU
4) Register your AMI so you can fire up the instance in Ireland
/opt/aws/bin/ec2-register -K /home/ec2-user/.ec2/pk-XXX.pem -C /home/ec2-user/.ec2/cert-XXX.pem http://s3.amazonaws.com:80/S3_BUCKET/image.manifest.xml --region eu-west-1 --name DEVICENAME -a i386 --kernel aki-xxx
There are API tools for this. http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-MigrateImage.html
I think that is now outdated by ec2-bundle-vol and ec2-migrate-image. By the way, you can also take a look at this Perl script by Lincoln D. Stein:
http://search.cpan.org/~lds/VM-EC2/bin/migrate-ebs-image.pl
Usage:
$ migrate-ebs-image.pl --from us-east-1 --to ap-southeast-1 ami-123456
Amazon has just announced support for this functionality in this blog post. Note that the answer from dmohr relates to copying EBS snapshots, not AMIs.
In case the blog post is unavailable, quoting the relevant parts:
To use AMI Copy, simply select the AMI to be copied from within the AWS Management Console, choose the destination region, and start the copy. AMI Copy can also be accessed via the EC2 Command Line Interface or EC2 API as described in the EC2 User’s Guide. Once the copy is complete, the new AMI can be used to launch new EC2 instances in the destination region.
AWS now supports the copy of an EBS snapshot to another region via UI/CLI/API. You can copy the snapshot and then make an AMI from it. Direct AMI copy is coming - from AWS:
"We also plan to launch Amazon Machine Image (AMI) Copy as a follow-up
to this feature, which will enable you to copy both public and
custom-created AMIs across regions.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-copy-snapshot.html?ref_=pe_2170_27415460
Ylastic allows you to move EBS-backed Linux images between regions.
It's $25 or $50 per month, but it looks like you can evaluate it for a week.
I just did this using a script on CloudyScripts, worked fantastically: https://cloudyscripts.com/tool/show/5 (and it's free).
As of 2017, it's pretty simple: in the EC2 console, select the AMI, choose the "Copy AMI" action, and pick the destination region.
I'll add Scalr to the list of tools you can use (disclaimer: I work there). Within Scalr, you can create your own AMIs (we call them roles). Once your role is ready, you just have to choose where you want to deploy it (in any region).
Scalr is open source, released under the Apache 2 license: you can download it and install it yourself. Otherwise, it is also available as a hosted version that includes support. Alternatives to Scalr include RightScale and enStratus.