Is Heptio Authenticator deployed automatically when creating an EKS cluster? - authentication

I have done the below steps:
Created an EKS cluster
Installed the aws-iam-authenticator client binary
Executed "aws eks update-kubeconfig --name <cluster_name>"
Executed "kubectl get svc"
I am able to view the services available in my cluster. When I look at the ~/.kube/config file, I see it uses an external command called "aws-iam-authenticator".
My understanding is that "aws-iam-authenticator" uses my ~/.aws/credentials, retrieves a token from AWS (aws-iam-authenticator token -i cluster-1), and uses that token for the "kubectl get svc" command. Is my understanding correct?
If my understanding is correct, where does Heptio come into the picture in this flow? Is Heptio Authenticator deployed automatically when creating the EKS cluster?

Basically, Heptio Authenticator = aws-iam-authenticator.
You can check the details here. If your aws-iam-authenticator is working fine, you don't need to worry about Heptio separately; the project was simply renamed.
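For illustration, here is roughly what kubectl does behind the scenes; the cluster name comes from your example, and the exact output fields can vary between versions:
# kubectl runs the exec plugin configured in ~/.kube/config for every request:
aws-iam-authenticator token -i cluster-1
# This prints an ExecCredential JSON document; the bearer token sent to the
# EKS API server is in .status.token (e.g. extract it with jq):
aws-iam-authenticator token -i cluster-1 | jq -r .status.token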

Serverless Framework deploy through CircleCI

I'm trying to integrate Serverless into my CircleCI workflow.
I first tried adding both the key and the secret under AWS Permissions, but that did not work.
Then I added the key and secret as environment variables, and in my config file:
sudo npm install -g serverless
sls config credentials --provider aws --key $AWS_ACCESS_KEY_ID --secret $AWS_SECRET_ACCESS_KEY
sls deploy -v
But I see the same error:
Serverless Error ---------------------------------------
You are not currently logged in. Follow instructions in http://slss.io/run-in-cicd to setup env vars for authentication.
Has anyone had this issue? I could not find an answer or hint online. Thanks.
This likely only applies to those trying to use Serverless Enterprise with the monitoring & dashboards they have set up. #wintvelt's answer wouldn't work for me, because if I deleted the org variable, it would likely break the connection needed for Enterprise. So these are the steps for my CircleCI setup:
1. In CircleCI, create a Context for each environment with the AWS Key ID and Secret as environment variables (putting them in a Context is a nice-to-have; you could use other methods of having CircleCI inject environment variables into builds).
2. In your Serverless Framework dashboard, create a new access key which you will use in CircleCI.
3. Create a new environment variable SERVERLESS_ACCESS_KEY with the value from step 2.
I got this idea from reading how Seed.run has users integrate with Serverless. For more info, read this link: https://seed.run/docs/integrating-with-serverless-enterprise.
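As a rough sketch, assuming the Context injects SERVERLESS_ACCESS_KEY plus AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY into the build, the deploy step can then stay as simple as:
# SERVERLESS_ACCESS_KEY authenticates against the Serverless dashboard,
# while the AWS key pair authenticates against AWS itself.
npm install -g serverless
sls deploy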
I just checked: CircleCI has stopped supporting AWS Permissions as a configurable option on the settings page.
You need to set the credentials as environment variables for the project. The variables must be named exactly AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
That's all you need to do; there are no additional steps. I tried this on my project and it worked.
Your deployment step should simply be
sls deploy
As a follow-up to the previous answer: I had exactly the same error.
I ended up applying the solution from the chat.
These are the fixes I applied:
1. In the CircleCI project settings, under "AWS Permissions", I added the AWS Access Key ID and Secret Access Key.
2. In the CircleCI project settings, under "Environment Variables", I also added the AWS Access Key ID and Secret Access Key.
3. From my serverless.yml file, I deleted the line with the org variable.
For me, steps 1 and 2 alone were not enough. I also had to remove the org line from my yml file to make deployment via CircleCI work.
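If you are unsure whether your project still references the dashboard, a quick check like the following shows the lines to remove; the file name assumes the default serverless.yml:
# Look for dashboard-related keys; delete or comment them out if you are
# not using the Serverless dashboard.
grep -nE '^(org|app):' serverless.yml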
For those landing here with the same issue, hope this helps!

Spinnaker Support for App ELB in AWS

I am facing two issues with a new Spinnaker installation.
I cannot see my Application Load Balancers listed in the dropdown of the load balancers tab while creating a pipeline. We currently use only Application Load Balancers in our setup. I tried editing the pipeline's JSON file with the config below, but it didn't work. I verified this by checking the ASG created in my AWS account for any associated ELB/target group, but I couldn't find any.
"targetGroups": [
"TG-APP-ELB-NAME-DEV"
],
Hence, I would like to confirm how I can get Application Load Balancer support into my Spinnaker installation and how to use it.
I also have an AMI search issue. My current setup is briefly as follows:
One managing account - prod, where my Spinnaker EC2 instance and my prod application instances are running
Two managed accounts - dev & test, where my application test instances are running
When I create a new AMI in my dev AWS account and try to search for the newly created AMI from Spinnaker, the search fails at first with an error that the AMI cannot be found. Then I shared the AMI from dev to prod, after which Spinnaker was able to find it, but it then failed with an UnAuthorized error.
Please help me clarify:
1. Is sharing required for any new AMI from dev -> prod, or would our spinnakerManaged role take care of the permissions?
2. How can I fix this problem and create the AMI successfully?
Regarding #1, have you created the App Load Balancer through the Spinnaker UI or directly through AWS?
If it is the former, then make sure it follows the naming convention expected by Spinnaker (I believe the load balancer name should start with the application name).
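One way to sanity-check the names, assuming the AWS CLI is configured for the account in question, is to list the load balancers and target groups and confirm they start with the application name registered in Spinnaker:
# List ALB and target group names as seen by AWS:
aws elbv2 describe-load-balancers --query 'LoadBalancers[].LoadBalancerName' --output text
aws elbv2 describe-target-groups --query 'TargetGroups[].TargetGroupName' --output text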

Is there an Ansible module for creating 'instance-store' based AMIs?

Creating AMIs from EBS-backed instances is exceedingly easy, but doing the same from an instance-store-based instance seems like it can only be done manually using the CLI.
So far I've been able to bootstrap the creation of an 'instance-store' based server off of an HVM Amazon Linux AMI with Ansible, but I'm getting lost on the steps that follow... I'm trying to follow this: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-instance-store-ami.html#amazon_linux_instructions
Apparently I need to store my x.509 cert and key on the instance, but which key is that? Is that...
one I have to generate on the instance with openssl,
one that I generate/convert from AWS,
one I generate with Putty, or
one that already exists in my AWS account?
After that, I can't find any reference to ec2-bundle-vol in Ansible. So I'm left wondering if the only way to do this is with Ansible's command module.
Basically what I'm hoping to find out is: Is there a way to easily create instance-store based AMIs using Ansible, and if not, can anyone outline the steps necessary to automate this? Thanks!
Generally speaking, Ansible AWS modules are meant to manage AWS resources by interacting with the AWS HTTP API (i.e., actions you could otherwise perform in the AWS Management Console).
They are not intended to run AWS specific system tools on EC2 instances.
ec2-bundle-vol and ec2-upload-bundle must be run on the EC2 instance itself; they are not callable via the HTTP API.
I'm afraid you need to write a custom playbook / role to automate the process.
On the other hand, aws ec2 register-image is an AWS API call and corresponds to the ec2_ami Ansible module.
Unfortunately, this module doesn't seem to support image registering from an S3 bucket.
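For reference, the manual flow such a custom playbook or role would have to wrap (for example via the command or shell module) looks roughly like this; the key, certificate, bucket and account values are placeholders:
# On the running instance-store instance (the AMI tools must be installed):
ec2-bundle-vol -k /tmp/pk.pem -c /tmp/cert.pem -u <account_id> -r x86_64 -d /tmp/bundle
# Upload the bundle to S3:
ec2-upload-bundle -b <bucket>/<prefix> -m /tmp/bundle/image.manifest.xml -a <access_key> -s <secret_key>
# Register the AMI; this step is a plain API call, so it can be done with the
# AWS CLI (or the ec2_ami module, if it gains support for S3-backed images):
aws ec2 register-image --name my-instance-store-ami --image-location <bucket>/<prefix>/image.manifest.xml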

Restart Kubernetes API server with different options

I'm pretty new to Kubernetes and clusters so this might be very simple.
I set up a Kubernetes cluster with 5 nodes using kubeadm following this guide. I got some issues but it all worked in the end. So now I want to install the Web UI (Dashboard). To do so I need to set up authentication:
Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
So I read the authentication page of the documentation and decided I want to add authentication via a static password file. To do so, I have to append the option --basic-auth-file=SOMEFILE to the API server.
When I run ps -aux | grep kube-apiserver, this is the result, so the API server is already running (which makes sense, because I use it when calling kubectl):
kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
A couple of questions I have:
So where are all these options set?
Can I just kill this process and restart it with the option I need?
Will it be started when I reboot the system?
In /etc/kubernetes/manifests there is a file called kube-apiserver.json. This JSON file contains all the options you can set. I appended --basic-auth-file=SOMEFILE and rebooted the system (right after changing the file, kubectl stopped working and the API was shut down).
After the reboot the whole system was working again.
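As a minimal sketch, with the file name, path and credentials below being illustrative (the basic-auth file format is password,user,uid):
# Create a static password file on the master:
echo 'MyPassword,admin,admin' | sudo tee /etc/kubernetes/pki/basic_auth.csv
# Then append this flag to the kube-apiserver command in
# /etc/kubernetes/manifests/kube-apiserver.json:
#   --basic-auth-file=/etc/kubernetes/pki/basic_auth.csv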
Update
I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the credentials from the master node (/etc/kubernetes/admin.conf) to my laptop, and run kubectl proxy to proxy the dashboard traffic to my local machine. Now I can access it on my laptop at 127.0.0.1:8001/ui.
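Roughly, with the user and host names as placeholders:
# Copy the admin kubeconfig from the master to the laptop:
scp user@master:/etc/kubernetes/admin.conf ~/.kube/config
# Proxy the API server (and with it the dashboard) to localhost:
kubectl proxy
# The dashboard is then reachable at http://127.0.0.1:8001/ui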
I just ran into this for a similar use case: the API server was crashing after adding an option with a file path.
I was able to solve it, and maybe this helps others as well:
As described in https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#constants-and-well-known-values-and-paths the files in /etc/kubernetes/manifests are static pod definitions. Therefore container rules apply.
So if you add an option with a file path, make sure you make it available to the pod with a hostPath volume.
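One way to verify this, assuming a Docker-based kubeadm setup and an illustrative file path, is to check whether the file is actually visible inside the apiserver container:
# Find the kube-apiserver container, then check the file exists inside it;
# if no hostPath volume mounts it, the ls fails and the apiserver will crash
# once the flag points at the (unmounted) path.
docker ps | grep kube-apiserver
docker exec <container_id> ls /etc/kubernetes/pki/basic_auth.csv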

(EC2) Launch Windows instance programmatically via command line

I'd like to launch a Windows 2008 (64-bit, base install) instance programmatically, kind of like clicking on the Launch Instance link and following the "Create a New Instance" wizard.
I read about the ec2-run-instances command and tried running it in PuTTY using this syntax:
/opt/aws/bin/ec2-run-instances ami_id ami-e5784391 -n 1
--availability-zone eu-west-1a --region eu-west-1 --instance-type m1.small --private-key /full/path/MyPrivateKey.pem --group MyRDP
but it always complains that:
Required option '-C, --cert CERT' missing (-h for usage)
According to the documentation, this option isn't required!!
Can someone tell me what's wrong anyway? I'm just trying to programmatically launch a fresh Windows install, run some tests in the cloud and shut it down after that.
The error message is correct (just try adding --cert ;) - to what documentation are you referring here?
The requirement is clearly outlined in the Microsoft Windows Guide for Amazon EC2, specifically in Task 4: Set the EC2_PRIVATE_KEY and EC2_CERT Environment Variables:
The command line tools need access to an X.509 certificate and a corresponding private key that are associated with your account. [...]
You can either specify your credentials with the --private-key and --cert parameters every time you issue a command, or you can create environment variables that point to the credential files on your local system. If the environment variables are properly configured, you can omit the parameters when you issue a command.
[emphasis mine]
Maybe the option of using environment variables has been misleading somehow somewhere?
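For example, with the file names as placeholders for your own credential files:
# Point the EC2 API tools at your X.509 credentials once, then re-run the
# ec2-run-instances command without the --private-key/--cert parameters:
export EC2_PRIVATE_KEY=/path/to/pk-XXXX.pem
export EC2_CERT=/path/to/cert-XXXX.pem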
Alternative
Please note that you can ease and speed up working with EC2 considerably by using alternate scripting environments covering the same ground, in particular the excellent boto, which is a Python package that provides interfaces to Amazon Web Services.
Boto uses the nowadays more common authentication scheme based on access keys only rather than X.509 certificates (e.g. an AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY pair), which furthermore can (and should) be managed via AWS Identity and Access Management (IAM) to avoid the risk of exposing your main AWS account credentials in the first place. See my answer to How to download an EC2 X.509 certificate with an IAM User account? for more details on this.
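With boto, the credentials are instead picked up from the usual access-key environment variables; the values below are placeholders, ideally belonging to a dedicated IAM user:
# Access-key based credentials; no X.509 certificate is required.
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX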
Good luck!