Access S3 in cron job in docker on Elastic Beanstalk - amazon-s3

I have a cron job in a docker image that I deploy to elastic beanstalk. In that job I wish to include read and write operations on S3 and have included the AWS CLI tools for that purpose.
But the AWS CLI isn't very useful without credentials. How can I securely include AWS credentials in the Docker image so that the AWS CLI will work? Or should I take some other approach?

Always try to avoid setting credentials on machines if they run within AWS.
Do the following:
Go into the IAM console and create an IAM role, then edit the policy of that role to have appropriate S3 read/write permissions.
Then go to the Elastic Beanstalk console, find your environment, and open the configuration/instances section. Set the "instance profile" to use the role you created (a profile is associated with a role; you can see it in the IAM console when viewing the role).
This means each Beanstalk EC2 instance will have the permissions you set in the IAM role (the AWS CLI automatically uses the instance profile of the current machine when one is available).
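If you prefer scripting this instead of clicking through the console, the equivalent AWS CLI steps look roughly like the following sketch (the role, profile, and file names are placeholders, and in practice you would scope the policy to your own bucket rather than attach the broad AmazonS3FullAccess managed policy):

# Trust policy so EC2 instances can assume the role
cat > ec2-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role --role-name eb-cron-s3-role \
  --assume-role-policy-document file://ec2-trust.json

# Attach S3 permissions (swap in a tighter custom policy for production)
aws iam attach-role-policy --role-name eb-cron-s3-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Wrap the role in an instance profile, then select that profile in the Beanstalk configuration
aws iam create-instance-profile --instance-profile-name eb-cron-s3-profile
aws iam add-role-to-instance-profile --instance-profile-name eb-cron-s3-profile \
  --role-name eb-cron-s3-role

Once the environment's instances use that profile, the cron job inside the container can run aws s3 cp or aws s3 ls with no keys baked into the image.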
More info:
http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html#use-roles
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html

Related

.NET Core not able to connect to AWS S3 bucket

I have .NET Core code that writes a file to an AWS S3 bucket. This code is called from Hangfire as a job. It works fine on my machine, but when I deploy the solution to Cloud Foundry as a Docker container, the code throws a "connection refused" error at the line where it connects to AWS. The exception doesn't contain much information.
I set up the proxy environment in the Dockerfile, and I can verify the proxy by SSHing into the container in Cloud Foundry. When I run a curl command against the AWS S3 bucket from that SSH session, it returns an "Access Denied" error, which confirms the container has no firewall issue reaching AWS.
I can't figure out why the Hangfire job can't connect to AWS. Any suggestions?

Deploy web app using Elastic Beanstalk without auto-scaling

I'm trying to deploy a Node.js application to AWS using Elastic Beanstalk. However, creating a web server environment fails because my account does not have permission to use auto scaling due to cost. Is there a way to disable auto scaling completely with Elastic Beanstalk?
Yes, just create a single instance environment.
As per the AWS docs, single-instance environments have no load balancer and no auto scaling beyond the single instance. You can read more about Elastic Beanstalk environment types here:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-types.html
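For example, with the EB CLI you can request a single-instance environment at creation time (a sketch; "my-env" is a placeholder environment name):

# Creates a single-instance environment: no load balancer, capacity fixed at one instance
eb create my-env --single

# The same can be expressed as an option setting in a saved configuration:
#   Namespace: aws:elasticbeanstalk:environment, OptionName: EnvironmentType, Value: SingleInstance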

How to dynamically create Airflow S3 connection using IAM service

My Airflow application is running on an AWS EC2 instance which has an IAM role attached. Currently I am creating the Airflow S3 connection using a hardcoded access key and secret key. But I want my application to pick up these AWS credentials from the instance itself.
How to achieve this?
We have a similar setup; our Airflow instance runs inside containers deployed on an EC2 machine. We set up the S3 access policies on the EC2 machine's instance profile. You don't need to pick up credentials on the EC2 machine, because the machine has an instance profile that should already carry all the permissions you need. On the Airflow side, we only use the aws_default connection; in its extra parameter we only set the default region, with no credentials at all.
Here is a detailed article about instance profiles: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
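For reference, a minimal sketch of that connection on the Airflow 2.x CLI (the region value is only an example); because no key is stored in the connection, the underlying boto3 client falls back to the instance profile credentials:

airflow connections add aws_default \
  --conn-type aws \
  --conn-extra '{"region_name": "us-east-1"}'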
The question is answered, but for future reference, it is also possible to do this without relying on aws_default, purely via environment variables. Here is an example that writes logs to S3 using an AWS connection that benefits from IAM:
AIRFLOW_CONN_AWS_LOG="aws://"
AIRFLOW__CORE__REMOTE_LOG_CONN_ID=aws_log
AIRFLOW__CORE__REMOTE_LOGGING=true
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER="s3://path to bucket"

Spinnaker configuration

I have a question about the Spinnaker/Halyard installation. Can Spinnaker manage the AWS cloud provider without being installed on an EC2 instance? Meaning, can I install Spinnaker locally, add an AWS account, and manage pipelines?
Can Spinnaker manage the AWS cloud provider without being installed on an EC2 instance?
Spinnaker can be installed on any Ubuntu server - for example, you could run a Spinnaker instance from Google's Click to Deploy image and have it manage your EC2 account.
Spinnaker is composed of a number of microservices, so running it on a local workstation may be cumbersome; I suggest dedicating a specific machine to it. Alternatively, if you're set on running it locally, you could install Halyard locally and point it at a Minikube installation on your machine.
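A rough sketch of that local setup, assuming Halyard is already installed on your workstation, Minikube is running, and "minikube-account" is just a placeholder name:

# Register the local Minikube cluster as a Kubernetes account
hal config provider kubernetes enable
hal config provider kubernetes account add minikube-account \
  --context $(kubectl config current-context)

# Deploy Spinnaker's microservices into that cluster
# (you would also configure persistent storage and a Spinnaker version first)
hal config deploy edit --type distributed --account-name minikube-account
hal deploy apply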
You can set up any of these providers in your Spinnaker installation:
https://www.spinnaker.io/setup/install/providers/
App Engine
Amazon Web Services
Azure
Cloud Foundry
DC/OS
Google Compute Engine
Kubernetes (legacy)
Kubernetes V2 (manifest based)
Openstack
Oracle
You just need to integrate your service accounts into spinnaker to authorize resource creation.
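For the AWS provider specifically, the Halyard side of that integration looks roughly like this sketch (the account name, account ID, and role name are placeholders; the managed account needs a role that Spinnaker can assume):

hal config provider aws enable
hal config provider aws account add my-aws-account \
  --account-id 123456789012 \
  --assume-role role/spinnakerManaged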
Yes, it will work. You just need to create a service account and pass a kubeconfig file to Spinnaker; Spinnaker then handles the deployment part automatically once you configure it for that.
Some useful links:
https://www.spinnaker.io/setup/security/authorization/service-accounts/
https://www.spinnaker.io/setup/

Setting up Spinnaker on Kubernetes and accessing spinnaker UI

I have deployed the individual Spinnaker components to Kubernetes, and when I try to access Spinnaker through http://localhost:9000 I get an empty response from the server. I verified the configuration in clouddriver-local.yml and spinnaker-local.yml and everything seems fine. Am I missing anything here? When I curl localhost:9000, I get an empty response from the server.
Spinnaker has evolved since then, and it should be easier to set up by now. If you only want to do a PoC, or deploy small enterprise projects, I suggest you use Armory's Minnaker.
If you want to deploy large projects to a robust, fully fledged Kubernetes cluster, that is a different story; the prerequisites and steps are as follows (a kubectl sketch of the first few steps follows the installation list):
Minimum 4 CPUs and 12 GB of memory
Access to an existing object storage bucket
Access to an IAM role or user with access to the bucket. (AWS IAM for AWS S3)
An existing Kubernetes Ingress controller, or the permissions to install the NGINX Ingress Controller (for Deck UI access)
Installation
Create a Kubernetes namespace for Spinnaker and Halyard
Grant the default ServiceAccount in the namespace access to the cluster-admin ClusterRole in the namespace.
Run Halyard (Spinnaker installer) as a Pod in the created namespace (with a StatefulSet).
Create a storage bucket for Spinnaker to store persistent configuration in.
Create a user (an AWS IAM user in the case of an AWS deployment) that Spinnaker will use to access the bucket (or alternatively, grant access to the bucket via roles).
Run the hal client interactively in the Kubernetes Pod:
Build out the hal config YAML file (.hal/config)
Configure Spinnaker with the IAM credentials and bucket information
Turn on other recommended settings (artifacts and http artifact providers: github, bitbucket, etc)
Install Spinnaker with hal deploy apply
Expose Spinnaker (Deck through ingress)
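As a minimal kubectl sketch of the first few steps (the namespace, binding, and Pod names are placeholders; the Halyard StatefulSet itself comes from your own manifest):

# Namespace for Spinnaker and Halyard
kubectl create namespace spinnaker

# Give the namespace's default ServiceAccount the cluster-admin ClusterRole within the namespace
kubectl -n spinnaker create rolebinding spinnaker-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=spinnaker:default

# Once the Halyard Pod is running, use the hal client interactively inside it
kubectl -n spinnaker exec -it halyard-0 -- bash
# ...then inside the Pod: hal config ..., hal deploy apply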
For more details refer to
Armory's doc
Spinnaker Distributed installation in Kubernetes
Hope this guideline helps.