Are there any steps for configuring Spinnaker/Halyard to work with a Titus-based cluster (https://netflix.github.io/titus/)?
There aren't any steps described in the documentation: https://www.spinnaker.io/setup/install/providers/
Also, see this GitHub issue: https://github.com/spinnaker/spinnaker.github.io/issues/869
There is a sample config in the GitHub repo:
titus:
  enabled: true
  awsVpc: vpc0 # this is the default vpc used by titus
  accounts:
    - name: titusdevint
      environment: test
      discovery: "http://discovery.compary.com/v2"
      discoveryEnabled: true
      registry: testregistry # reference to the docker registry being used
      awsAccount: test # aws account underpinning
      autoscalingEnabled: true
      loadBalancingEnabled: false # load balancing will be released at a later date
      regions:
        - name: us-east-1
          url: https://myTitus.us-east-1.company.com/
          port: 7104
          autoscalingEnabled: true
          loadBalancingEnabled: false
        - name: eu-west-1
          url: https://myTitus.eu-west-1.company.com/
          port: 7104
          autoscalingEnabled: true
          loadBalancingEnabled: false
https://github.com/spinnaker/clouddriver/tree/master/clouddriver-titus
Right now you'll have to edit clouddriver.yml manually and then update via Halyard.
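One way to do that (a sketch, assuming a standard Halyard installation where custom profiles are picked up from ~/.hal/default/profiles/) is to put the titus block above into a clouddriver-local.yml custom profile and redeploy:
mkdir -p ~/.hal/default/profiles
# copy the titus: ... block shown above into this custom profile
vi ~/.hal/default/profiles/clouddriver-local.yml
# push the change out to the running Spinnaker installation
hal deploy apply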
If I understand correctly from Apply Pod Security Standards at the Cluster Level, in order to have a Pod Security Standard (PSS) as the default for the whole cluster I need to create an AdmissionConfiguration in a file that the API server consumes during cluster creation.
I don't see any way to configure or provide the AdmissionConfiguration at CreateCluster, and I'm also not sure how to provide this AdmissionConfiguration in a managed EKS cluster.
From the tutorials that use KinD or minikube it seems that the AdmissionConfiguration must be in a file referenced in the cluster-config.yaml, but if I'm not mistaken the EKS API server is managed and does not allow you to change or even see this file.
The GitHub issue aws/containers-roadmap "Allow Access to AdmissionConfiguration" seems to suggest that there is currently no way of providing an AdmissionConfiguration at creation time, but on the other hand aws-eks-best-practices says "These exemptions are applied statically in the PSA admission controller configuration as part of the API server configuration".
So, is there a way to provide a PodSecurityConfiguration for the whole cluster in EKS, or am I forced to just use per-namespace labels?
See also Enforce Pod Security Standards by Configuring the Built-in Admission Controller and the EKS Best Practices guide on PSS and PSA.
I don't think there is currently any way in EKS to provide configuration for the built-in Pod Security Admission (PSA) controller.
But if you want to implement a cluster-wide default for Pod Security Standards (PSS), you can do that by installing the official pod-security-webhook as a Dynamic Admission Controller in EKS.
git clone https://github.com/kubernetes/pod-security-admission
cd pod-security-admission/webhook
make certs
kubectl apply -k .
The default podsecurityconfiguration.yaml in pod-security-admission/webhook/manifests/020-configmap.yaml allows EVERYTHING, so you should edit it and write something like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: pod-security-webhook
  namespace: pod-security-webhook
data:
  podsecurityconfiguration.yaml: |
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "restricted"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      # Array of authenticated usernames to exempt.
      usernames: []
      # Array of runtime class names to exempt.
      runtimeClasses: []
      # Array of namespaces to exempt.
      namespaces: ["policy-test2"]
Then:
kubectl apply -k .
kubectl -n pod-security-webhook rollout restart deployment/pod-security-webhook # otherwise the pods won't reread the configuration changes
After those changes you can verify that the default forbids privileged pods with:
kubectl --context aihub-eks-terraform create ns policy-test1
kubectl --context aihub-eks-terraform -n policy-test1 run --image=ecerulm/ubuntu-tools:latest --rm -ti rubelagu-$RANDOM --privileged
Error from server (Forbidden): admission webhook "pod-security-webhook.kubernetes.io" denied the request: pods "rubelagu-32081" is forbidden: violates PodSecurity "restricted:latest": privileged (container "rubelagu-32081" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "rubelagu-32081" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "rubelagu-32081" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "rubelagu-32081" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "rubelagu-32081" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Note that you get the error forbidding privileged pods even though the namespace policy-test1 has no pod-security.kubernetes.io/enforce label, so you know that this rule comes from the pod-security-webhook that we just installed and configured.
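For comparison, the per-namespace label approach mentioned in the question would look something like this (a minimal sketch, with my-namespace as a hypothetical namespace name):
kubectl label namespace my-namespace \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=latest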
Now if you want to create a pod, you will be forced to do it in a way that complies with the restricted PSS, by specifying runAsNonRoot, seccompProfile.type, and capabilities. For example:
apiVersion: v1
kind: Pod
metadata:
  name: test-1
spec:
  restartPolicy: Never
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: test
      image: ecerulm/ubuntu-tools:latest
      imagePullPolicy: Always
      command: ["/bin/bash", "-c", "--", "sleep 900"]
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
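Applying that manifest to the same namespace should now be admitted (a sketch, assuming it was saved as test-1.yaml):
kubectl --context aihub-eks-terraform -n policy-test1 apply -f test-1.yaml
# expected output: pod/test-1 created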
I have a Lambda S3 trigger and a corresponding S3 bucket in my serverless.yaml, which works perfectly when I deploy it via serverless deploy.
However, when I want to remove everything with serverless remove I always get the same error (even without changing anything in the AWS console):
An error occurred: DataImportCustomS31 - Received response status [FAILED] from custom resource. Message returned: The specified bucket does not exist
This is strange because I never specified a bucket with that name in my serverless.yaml. I assume it somehow comes from the existing: true property of my S3 trigger, but I can't fully explain it, nor do I know how to fix it.
This is my serverless.yaml:
service: myTestService
provider:
  name: aws
  runtime: nodejs12.x
  region: eu-central-1
  profile: myprofile
  stage: dev
  stackTags:
    owner: itsme
custom:
  testDataImport:
    bucketName: some-test-bucket-zxzcq234
# functions
functions:
  dataImport:
    handler: src/functions/dataImport.handler
    events:
      - s3:
          bucket: ${self:custom.testDataImport.bucketName}
          existing: true
          event: s3:ObjectCreated:*
          rules:
            - suffix: .xlsx
    tags:
      owner: itsme
# Serverless plugins
plugins:
  - serverless-plugin-typescript
  - serverless-offline
# Resources your functions use
resources:
  Resources:
    TestDataBucket:
      Type: AWS::S3::Bucket
      Properties:
        AccessControl: Private
        BucketEncryption:
          ServerSideEncryptionConfiguration:
            - ServerSideEncryptionByDefault:
                SSEAlgorithm: AES256
        BucketName: ${self:custom.testDataImport.bucketName}
        VersioningConfiguration:
          Status: Enabled
I'm new to Kubernetes, and am playing with eksctl to create an EKS cluster in AWS. Here's my simple manifest file:
kind: ClusterConfig
apiVersion: eksctl.io/v1alpha5
metadata:
  name: sandbox
  region: us-east-1
  version: "1.18"
managedNodeGroups:
  - name: ng-sandbox
    instanceType: r5a.xlarge
    privateNetworking: true
    desiredCapacity: 2
    minSize: 1
    maxSize: 4
    ssh:
      allow: true
      publicKeyName: my-ssh-key
fargateProfiles:
  - name: fp-default
    selectors:
      # All workloads in the "default" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: default
      # All workloads in the "kube-system" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: kube-system
  - name: fp-sandbox
    selectors:
      # All workloads in the "sandbox" Kubernetes namespace matching the
      # following label selectors will be scheduled onto Fargate:
      - namespace: sandbox
        labels:
          env: sandbox
          checks: passed
I created two roles: EKSClusterRole for cluster management, and EKSWorkerRole for the worker nodes. Where do I use them in the file? I'm looking at the eksctl Config file schema page and it's not clear to me where in the manifest file to use them.
As you mentioned, it's in the managedNodeGroups docs:
managedNodeGroups:
  - ...
    iam:
      instanceRoleARN: my-role-arn
      # or
      # instanceRoleName: my-role-name
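Applied to the manifest from the question it might look roughly like this (a sketch: the ARNs and account id are placeholders, and if I read the eksctl schema right, the top-level iam.serviceRoleARN field is where a pre-created cluster role can be referenced):
iam:
  serviceRoleARN: arn:aws:iam::111122223333:role/EKSClusterRole  # placeholder account id
managedNodeGroups:
  - name: ng-sandbox
    # ...rest of the nodegroup settings as in the question...
    iam:
      instanceRoleARN: arn:aws:iam::111122223333:role/EKSWorkerRole  # placeholder account id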
You should also read about
Creating a cluster with Fargate support using a config file
AWS Fargate
I am a newbie to the serverless stack. Following is the serverless.yml file. On deploying this in GitLab I get an error:
Serverless Error ---------------------------------------
An error occurred: S3XPOLLBucket - bucket already exists.
The serverless.yml file is:
service: sa-s3-resources
plugins:
  - serverless-s3-sync
  - serverless-s3-remover
custom:
  basePath: sa-s3-resources
  environment: ${env:ENV}
provider:
  name: aws
  stage: ${env:STAGE}
  region: ${env:AWS_DEFAULT_REGION}
  environment:
    STAGE: ${self:provider.stage}
resources:
  Resources:
    S3XPOLLBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: gs-sa-xpoll-file-${self:custom.environment}-${self:provider.stage}
    S3JNLBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: gs-sa-jnl-file-${self:custom.environment}-${self:provider.stage}
An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts. This means that after a bucket is created, the name of that bucket cannot be used by another AWS account in any AWS Region until the bucket is deleted.
That means you have to choose a unique name that has not already been chosen by someone else (or even by you in a different development stack) globally.
More details: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html
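For example, one way to make the names unique is to append something only your account or stack will have, such as the account id (a sketch; ${aws:accountId} assumes a Serverless Framework version that supports the aws variable source, otherwise add your own unique suffix under custom):
resources:
  Resources:
    S3XPOLLBucket:
      Type: AWS::S3::Bucket
      Properties:
        # appending the account id (or any suffix unique to you) avoids the global name collision
        BucketName: gs-sa-xpoll-file-${self:custom.environment}-${self:provider.stage}-${aws:accountId}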
Here all the properties files are in a GitHub location, so I am able to read them using the uri path. How will I read them if they are on my local system? Can anybody please guide?
server:
  port: 8888
eureka:
  instance:
    hostname: configserver
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://discovery:8761/eureka/
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/****/******
You need to use Spring Cloud Config in native mode, e.g.:
spring:
  cloud:
    config:
      server:
        bootstrap: true
        native:
          search-locations: file:///C:/ConfigData
See the following link for more information:
http://cloud.spring.io/spring-cloud-config/spring-cloud-config.html#_file_system_backend
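Note that, per the Spring Cloud Config documentation, the file-system backend is only used when the Config Server is launched with the native profile active, so in addition to the configuration above you would set (a minimal sketch):
spring:
  profiles:
    active: native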