Setting up Spinnaker on Kubernetes and accessing the Spinnaker UI

I have deployed the individual Spinnaker components to Kubernetes, and when I try to access Spinnaker through http://localhost:9000 I get an empty response from the server. I verified the configuration in clouddriver-local.yml and spinnaker-local.yml and everything seems fine. Am I missing anything here? When I curl localhost:9000 I also get an empty reply from the server.
Here is the Kubernetes setup info:

Hi, Spinnaker has evolved since then and it should be easier to set up by now. If you only want a PoC, or to deploy small enterprise projects, I suggest you use Armory's Minnaker.
If you want to deploy large projects to a robust, fully featured Kubernetes cluster, then that is a different story, and the prerequisites and steps are as follows:
Minimum 4 CPUs and 12 GB of memory
Access to an existing object storage bucket
Access to an IAM role or user with access to the bucket. (AWS IAM for AWS S3)
An existing Kubernetes Ingress controller, or the permissions to install the NGINX Ingress Controller (for Deck UI access)
Installation
Create a Kubernetes namespace for Spinnaker and Halyard.
Grant the default ServiceAccount in that namespace access to the cluster-admin ClusterRole.
Run Halyard (the Spinnaker installer) as a Pod in the created namespace (via a StatefulSet).
Create a storage bucket for Spinnaker to store persistent configuration in.
Create a user (an AWS IAM user in the case of an AWS deployment) that Spinnaker will use to access the bucket (or alternatively, grant access to the bucket via roles).
Run the hal client interactively in the Kubernetes Pod:
Build out the hal config YAML file (.hal/config)
Configure Spinnaker with the IAM credentials and bucket information
Turn on other recommended settings (artifacts and HTTP artifact providers: GitHub, Bitbucket, etc.)
Install Spinnaker with hal deploy apply
Expose Spinnaker (Deck) through an Ingress (a minimal command sketch follows after this list)
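As an illustration only, here is a rough shell sketch of those steps, run from inside the Halyard pod where noted. The names (the spinnaker namespace, my-spinnaker-bucket, my-k8s-account) and the S3 storage flags are assumptions you will need to adapt to your own environment:

# Namespace and permissions (run from a machine with cluster-admin access)
kubectl create namespace spinnaker
kubectl create clusterrolebinding spinnaker-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=spinnaker:default

# Inside the Halyard pod: point Spinnaker at the storage bucket (names are placeholders)
hal config storage s3 edit \
  --bucket my-spinnaker-bucket \
  --region us-east-1 \
  --access-key-id "$AWS_ACCESS_KEY_ID" \
  --secret-access-key        # prompts for the secret interactively
hal config storage edit --type s3

# Register the Kubernetes account Spinnaker will deploy to
hal config provider kubernetes enable
hal config provider kubernetes account add my-k8s-account \
  --context "$(kubectl config current-context)"
hal config deploy edit --type distributed --account-name my-k8s-account

# Pick a version and install
hal version list
hal config version edit --version <version>
hal deploy apply

# For quick UI access without an Ingress you can port-forward Deck on 9000
# (service name assumes Halyard's distributed-install naming)
kubectl -n spinnaker port-forward svc/spin-deck 9000:9000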
For more details refer to
Armory's doc
Spinnaker Distributed installation in Kubernetes
Hope this guideline helps.

Related

Using the Cloud Code plugin for GCP (Google Cloud Platform)

I have a local cluster (minikube) that works perfectly well on my laptop (Mint 19.3, IntelliJ 2019.3 with the Cloud Code plugin, Java 11 backend, MongoDB, front end, ... all OK). But I can't find any useful information (on the Google Cloud Platform site or from IntelliJ) about configuring a new Google Cloud cluster. I can only see my minikube configuration in the cluster explorer... even when I have stopped minikube!
It seems the configuration should come from kubectl!? But how can I force the plugin to connect to GCP? I have a GCP account and have created a cluster and an image repository.
GCP documentation looks really unclear.
I solved the problem. You need to install the Cloud SDK (is there another solution?), and use gcloud commands to link kubectl with the new Kubernetes context and credentials. A new kubectl configuration (context) is generated, and you have to switch to that context (kubectl config use-context your-new-cluster).
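For reference, a minimal sketch of that flow with gcloud; the project, cluster and zone names are placeholders you replace with your own:

# Install kubectl through the SDK if you don't have it yet
gcloud components install kubectl

# Authenticate and select your project
gcloud auth login
gcloud config set project your-project-id

# Generate a kubectl context for the GKE cluster; this also makes it the current context
gcloud container clusters get-credentials your-cluster --zone europe-west1-b

# Verify, and switch back to it later if needed
# (the gke_<project>_<zone>_<cluster> naming is the usual GKE context format)
kubectl config get-contexts
kubectl config use-context gke_your-project-id_europe-west1-b_your-cluster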
One more thing: to use Google Container Registry for your Docker images, you have to specify where to find/push them in the run/edit configuration, under the image options line -> gcr.io/your-project-id. I couldn't use the bucket I created before pushing; a new one was created. Is there a way to connect to an existing bucket?
If you want to manage your clusters from an on-prem machine you will need to install the Cloud SDK and configure your cluster access; this will allow you to use kubectl commands to create and administer clusters on GKE. The Cloud Code plugin should install this SDK automatically; you can take a look at this guide to learn how to use it.

How to dynamically create Airflow S3 connection using IAM service

My Airflow application is running on an AWS EC2 instance which has an IAM role attached. Currently I am creating the Airflow S3 connection using a hardcoded access key and secret key, but I want my application to pick up the AWS credentials from the instance itself.
How can I achieve this?
We have a similar setup: our Airflow instance runs in containers deployed on an EC2 machine. We set up the S3 access policies on the EC2 machine's instance profile. You don't need to pick up credentials on the EC2 machine, because the instance profile should already grant all the permissions you need. On the Airflow side, we only use the aws_default connection; in the extra parameter we only set the default region, and there are no credentials at all.
Here is a detailed article about instance profiles: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
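As a sketch of that approach (the region and the environment-variable style of defining the connection are assumptions; you can equally create the connection in the Airflow UI with the credential fields left empty):

# No keys anywhere: an aws:// connection with no credentials makes the hook fall
# back to boto3's default credential chain, which finds the EC2 instance profile.
AIRFLOW_CONN_AWS_DEFAULT="aws://?region_name=us-east-1"

# From the EC2 instance you can confirm the instance profile is attached
# (with IMDSv2 enforced you need to fetch a token first):
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
aws sts get-caller-identity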
The question is answered, but for future reference it is possible to do this without relying on aws_default, just via environment variables. Here is an example that writes logs to S3 using an AWS connection, so it benefits from IAM:
AIRFLOW_CONN_AWS_LOG="aws://"
AIRFLOW__CORE__REMOTE_LOG_CONN_ID=aws_log
AIRFLOW__CORE__REMOTE_LOGGING=true
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER="s3://path to bucket"

Spinnaker configuration

I have a question about the Spinnaker/Halyard installation. Can Spinnaker manage the AWS cloud provider without being installed on an EC2 instance? Meaning, can I install Spinnaker locally, add an AWS account, and manage pipelines?
Can spinnaker manage AWS cloud provider without being installed on EC2 instance?
Spinnaker can be installed on any Ubuntu server - for example, you could run a Spinnaker instance from Google's Click to Deploy image and have it manage your EC2 account.
Spinnaker is composed of a number of microservices, so running it on a local workstation may be cumbersome; I suggest dedicating a specific machine to it. Alternatively, if you're set on running it locally, you could install Halyard locally and point it at a Minikube installation on your machine.
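If you go that route, adding the AWS provider with Halyard looks roughly like this. The account name, account ID and role name below are placeholder assumptions; see the AWS provider docs for the IAM roles Spinnaker expects:

hal config provider aws enable
hal config provider aws account add my-aws-account \
  --account-id 123456789012 \
  --assume-role role/spinnakerManaged
hal config provider aws edit \
  --access-key-id "$AWS_ACCESS_KEY_ID" \
  --secret-access-key        # prompts for the secret interactively
hal deploy apply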
You can set up any of these providers for your Spinnaker installation:
https://www.spinnaker.io/setup/install/providers/
App Engine
Amazon Web Services
Azure
Cloud Foundry
DC/OS
Google Compute Engine
Kubernetes (legacy)
Kubernetes V2 (manifest based)
Openstack
Oracle
You just need to integrate your service accounts into Spinnaker to authorize resource creation.
Yes, it will work. You just need to create a service account and pass the kubeconfig file to Spinnaker; Spinnaker then handles the deployment part automatically once you configure it.
Some useful links:
https://www.spinnaker.io/setup/security/authorization/service-accounts/
https://www.spinnaker.io/setup/
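A minimal sketch of that kubeconfig-based setup with Halyard; the account name, service account name and kubeconfig path are assumptions, and older Halyard versions also need --provider-version v2 for the manifest-based provider:

kubectl create serviceaccount spinnaker-sa -n default
# ...grant that service account the RBAC permissions Spinnaker needs, build a
# kubeconfig that uses its token, and then hand that file to Halyard:
hal config provider kubernetes enable
hal config provider kubernetes account add my-k8s-account \
  --kubeconfig-file /home/spinnaker/.kube/config
hal deploy apply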

How do you make an Express.js API accessible from the Internet?

I have an Express API server running on localhost on my own machine. How do I make it accessible from the Internet and not just my own machine?
Preferably, it would be deployed on AWS.
In AWS there are multiple ways of hosting your Express application, depending on how much flexibility versus convenience you want.
AWS Elastic Beanstalk:
This is the most convenient option: it creates an autoscaling and load-balancing environment with version management and rollback support, all from one place in the AWS web console. It also provides IDE support for deployments and CLI commands for CI/CD (a minimal CLI sketch follows at the end of this answer).
AWS ECS:
If you plan to dockerize your application (which I highly recommend), you can use AWS ECS to manage your Docker cluster, with container-level autoscaling and load balancing for more convenience. This also provides a CLI for CI/CD.
AWS EC2:
If you need more flexibility you can get a virtual server in AWS and configure autoscaling and load balancing manually. For a simple web app I would treat this as the last resort, since you have to do most of the work yourself.
All of these services can give you a publicly accessible URL if you configure them properly to allow access from outside: you need to set up networking and security groups correctly, exposing either the load balancer or the instance IP/DNS name to the Internet.
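For the Elastic Beanstalk route, a minimal deployment sketch with the EB CLI might look like this; the application name, environment name and platform string are assumptions to adapt:

# From the root of your Express project (package.json must define a start script)
eb init my-express-api --platform node.js --region us-east-1
eb create my-express-api-env     # creates the environment, load balancer and instances
eb deploy                        # pushes a new application version
eb open                          # opens the public URL in a browser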

Access S3 in cron job in docker on Elastic Beanstalk

I have a cron job in a docker image that I deploy to elastic beanstalk. In that job I wish to include read and write operations on S3 and have included the AWS CLI tools for that purpose.
But the AWS CLI isn't very useful without credentials. How can I securely include the AWS credentials in the Docker image so that the AWS CLI will work? Or should I take some other approach?
Always try to avoid setting credentials on machines if they run within AWS.
Do the following:
Go into the IAM console and create an IAM role, then edit the policy of that role to have appropriate S3 read/write permissions.
Then go to the Elastic Beanstalk console, find your environment and go to the configuration/instances section. Set the "instance profile" to use the role you created (a profile is associated with a role; you can see it in the IAM console when viewing the role).
This will mean that each Beanstalk EC2 instance will have the permissions you set in the IAM role (the AWS CLI will automatically use the instance profile of the current machine if available).
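If you prefer doing the same from the CLI rather than the console, a rough sketch follows; the role name, policy files and bucket name are placeholders, and the trust policy must allow ec2.amazonaws.com to assume the role:

aws iam create-role --role-name eb-s3-cron \
  --assume-role-policy-document file://ec2-trust-policy.json
aws iam put-role-policy --role-name eb-s3-cron \
  --policy-name s3-read-write \
  --policy-document file://s3-read-write.json
# Attach the role to the Beanstalk environment's instance profile, then verify
# from inside the container that credentials are picked up automatically:
aws sts get-caller-identity
aws s3 ls s3://my-bucket/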
More info:
http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html#use-roles
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html