Split a YAML file based on kind - awk

I have a sample YAML file. I want to split it into multiple files depending on kind: all documents of one kind should go into a single file. I have four different kinds, and each kind has multiple configuration documents.
---
apiVersion: v1
Kind: Appproject
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: Appproject
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: Appproject
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: Appproject
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: secret
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
---
apiVersion: v1
Kind: secret
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: secret
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: secret
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: Application
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: Application
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: Application
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: Application
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: Application
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
I should get three different files - application.yaml, secret.yaml, AppProject.yaml.
Expected content of Application.yaml:
apiVersion: v1
Kind: Application
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: Application
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: Application
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: Application
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: Application
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
Expected content of AppProject.yaml:
apiVersion: v1
Kind: Appproject
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: Appproject
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: Appproject
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
---
apiVersion: v1
Kind: Appproject
Metadata:
  name: hjgy
Spec:
  Ghjjhkjh:
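For reference, a minimal awk sketch along these lines can do the split. It assumes the input is in a file called input.yaml (a placeholder name), that documents are separated by lines containing only ---, and that each document carries a top-level Kind: (or kind:) line. Output file names are taken from that value as written (Appproject.yaml, secret.yaml, Application.yaml), so adjust the casing, e.g. with tolower(), if you need different names; note it appends, so remove any old output files before re-running.
awk '
  /^---[[:space:]]*$/ { emit(); next }     # document separator: flush the buffered document
  $0 ~ /^[Kk]ind:/ && kind == "" {         # remember the kind of the current document
    kind = $0
    sub(/^[Kk]ind:[[:space:]]*/, "", kind)
  }
  { buf = buf $0 ORS }                     # buffer every line of the current document
  END { emit() }                           # flush the final document
  function emit(   out) {
    if (kind != "") {
      out = kind ".yaml"
      if (seen[out]++) print "---" >> out  # separate documents inside the same output file
      printf "%s", buf >> out
    }
    buf = ""; kind = ""
  }
' input.yaml
Empty documents (for example two --- lines in a row, as in the sample) are simply skipped because they have no Kind line.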

Related

Certificate not issued by clusterIssuer EKS

I have tried using jetstack/cert-manager to secure my application launched on EKS, but I still see "Not Secure". I am not sure what I missed. Here is what I have done:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: something@gmail.com
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
      - http01:
          ingress:
            class: nginx
My manifest looks as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress
  ports:
    - protocol: TCP
      port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-production
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wordpress
                port:
                  number: 80
  tls:
    - hosts:
        - mydomain.com
      secretName: letsencrypt-production
When I do
kubectl describe certificate letsencrypt-production
I don't see anything under Events like Issued or Requested:
Status:
  Conditions:
    Last Transition Time: 2022-12-22T06:04:30Z
    Message: Certificate is up to date and has not expired
    Observed Generation: 1
    Reason: Ready
    Status: True
    Type: Ready
  Not After: 2023-03-21T11:04:22Z
  Not Before: 2022-12-21T11:04:23Z
  Renewal Time: 2023-02-19T11:04:22Z
Events: <none>
When I open my domain I see NET::ERR_CERT_AUTHORITY_INVALID.
What did I miss? Any help?
I can get it to work by creating a ClusterIssuer:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <my_email_id>
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
      - http01:
          ingress:
            class: nginx
and creating an Ingress resource as follows:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-production
spec:
  rules:
    - host: mydomain.com
      http:
        paths:
          - backend:
              service:
                name: wordpress
                port:
                  number: 80
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - mydomain.com
      secretName: letsencrypt-production
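If the browser still shows an untrusted certificate, it can help to walk the chain of resources cert-manager creates and check what is actually being served. A few hedged checks, assuming the default namespace and the names used above:
# Inspect the Certificate and the ACME resources created for it
kubectl get certificate,certificaterequest,order,challenge -n default
kubectl describe certificate letsencrypt-production -n default
# Check which host and secret the ingress references
kubectl describe ingress wordpress -n default
# See whether the served certificate is the Let's Encrypt one or the
# ingress controller's default self-signed certificate
openssl s_client -connect mydomain.com:443 -servername mydomain.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject -dates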

Simple front end application on EKS via AppMesh

I kindly ask for your help in finding the problem with my configuration.
It was built from the AWS workshop example, just rewritten for another HTTP container.
Right now, after applying this, everything is up, but when I hit the NLB I get "no healthy upstream".
I have checked the logs and see only 503 errors on my gateway ingress. Requests are not reaching my pod at all. Where did I make a mistake in my configuration?
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualGateway
metadata:
  name: ingress-gw
  namespace: shared
spec:
  namespaceSelector:
    matchLabels:
      gateway: shared-gw
  podSelector:
    matchLabels:
      app: ingress-gw
  listeners:
    - portMapping:
        port: 8088
        protocol: http
  logging:
    accessLog:
      file:
        path: /dev/stdout
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-echo-deployment
  namespace: shared
  labels:
    app: httpd-echo1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpd-echo1
  template:
    metadata:
      labels:
        app: httpd-echo1
      annotations:
        appmesh.k8s.aws/mesh: shared-mesh
    spec:
      containers:
        - name: httpd
          image: hashicorp/http-echo
          args:
            - "-text=test"
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  namespace: shared
  name: httpd-echo-service
  labels:
    app: httpd-echo1
spec:
  ports:
    - name: "http"
      port: 5678
      targetPort: 5678
  selector:
    app: httpd-echo1
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: shared-virtual-node-1
  namespace: shared
spec:
  podSelector:
    matchLabels:
      app: httpd-echo1
  listeners:
    - portMapping:
        port: 5678
        protocol: http
      healthCheck:
        protocol: http
        path: '/'
        healthyThreshold: 5
        unhealthyThreshold: 5
        timeoutMillis: 2000
        intervalMillis: 5000
  serviceDiscovery:
    dns:
      hostname: httpd-echo1.test.com
  logging:
    accessLog:
      file:
        path: /dev/stdout
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
  name: shared-virtual-service-1
  namespace: shared
spec:
  awsName: httpd-echo1.test.com
  provider:
    virtualNode:
      virtualNodeRef:
        name: shared-virtual-node-1
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: GatewayRoute
metadata:
  name: shared-gw-route-1
  namespace: shared
spec:
  httpRoute:
    match:
      prefix: "/"
    action:
      target:
        virtualService:
          virtualServiceRef:
            name: shared-virtual-service-1
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-gw
  namespace: shared
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-1,subnet-2,subnet-3
    service.beta.kubernetes.io/aws-load-balancer-internal: "false"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8088
      name: http
  selector:
    app: ingress-gw
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-gw
  namespace: shared
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-gw
  template:
    metadata:
      labels:
        app: ingress-gw
    spec:
      containers:
        - name: envoy
          image: 422531588944.dkr.ecr.eu-south-1.amazonaws.com/aws-appmesh-envoy:v1.16.1.1-prod
          ports:
            - containerPort: 8088
Here is the example which I tried to use: https://github.com/aws-containers/eks-app-mesh-polyglot-demo/tree/cf15e0d8e10c019d332f5378d132a8d620131df8/deployment
I tried to reproduce the same on my side and it worked fine. There are a couple of configuration changes I made to the above YAML.
Added the gateway label "gateway: shared-gw" to the VirtualGateway. Make sure that you have this label on the namespace as well.
Corrected the DNS hostname. This should be your application's ClusterIP service name:
serviceDiscovery:
  dns:
    hostname: httpd-echo1.shared.svc.cluster.local
Also, ensure that your load balancer is active and that the target group listener for this LB shows a healthy status.
I am adding the updated YAML below. You can try this and see if it works.
---
apiVersion: v1
kind: Namespace
metadata:
  name: shared
  labels:
    mesh: shared-mesh
    gateway: ingress-gw
    appmesh.k8s.aws/sidecarInjectorWebhook: enabled
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: Mesh
metadata:
  name: shared-mesh
spec:
  namespaceSelector:
    matchLabels:
      mesh: shared-mesh
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-echo1
  namespace: shared
  labels:
    app: httpd-echo1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpd-echo1
  template:
    metadata:
      labels:
        app: httpd-echo1
      annotations:
        appmesh.k8s.aws/mesh: shared-mesh
    spec:
      containers:
        - name: httpd
          image: hashicorp/http-echo
          args:
            - "-text=test"
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  namespace: shared
  name: httpd-echo1
  labels:
    app: httpd-echo1
spec:
  ports:
    - name: "http"
      port: 5678
      targetPort: 5678
  selector:
    app: httpd-echo1
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: shared-virtual-node-1
  namespace: shared
spec:
  podSelector:
    matchLabels:
      app: httpd-echo1
  listeners:
    - portMapping:
        port: 5678
        protocol: http
      healthCheck:
        protocol: http
        path: '/'
        healthyThreshold: 5
        unhealthyThreshold: 5
        timeoutMillis: 2000
        intervalMillis: 5000
  serviceDiscovery:
    dns:
      hostname: httpd-echo1.shared.svc.cluster.local
  logging:
    accessLog:
      file:
        path: /dev/stdout
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
  name: shared-virtual-service-1
  namespace: shared
spec:
  awsName: httpd-echo1.shared.svc.cluster.local
  provider:
    virtualNode:
      virtualNodeRef:
        name: shared-virtual-node-1
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualGateway
metadata:
  name: ingress-gw
  namespace: shared
spec:
  namespaceSelector:
    matchLabels:
      gateway: ingress-gw
  podSelector:
    matchLabels:
      app: ingress-gw
  listeners:
    - portMapping:
        port: 8088
        protocol: http
  logging:
    accessLog:
      file:
        path: /dev/stdout
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-gw
  namespace: shared
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8088
      name: http
  selector:
    app: ingress-gw
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-gw
  namespace: shared
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-gw
  template:
    metadata:
      labels:
        app: ingress-gw
    spec:
      containers:
        - name: envoy
          image: 422531588944.dkr.ecr.eu-south-1.amazonaws.com/aws-appmesh-envoy:v1.16.1.1-prod
          ports:
            - containerPort: 8088
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: GatewayRoute
metadata:
  name: shared-gw-route-1
  namespace: shared
spec:
  httpRoute:
    match:
      prefix: "/"
    action:
      target:
        virtualService:
          virtualServiceRef:
            name: shared-virtual-service-1
---
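If the gateway still returns "no healthy upstream" after applying the updated manifests, a few checks along these lines (resource names assumed from the YAML above) can help narrow it down:
# Confirm the mesh resources were reconciled by the App Mesh controller
kubectl get meshes
kubectl get virtualgateways,virtualnodes,virtualservices,gatewayroutes -n shared
# Application pods should show 2/2 containers (app + injected Envoy sidecar)
kubectl get pods -n shared -o wide
# Watch the gateway Envoy access log for incoming requests and upstream errors
kubectl logs deploy/ingress-gw -n shared
# Verify the gateway Service received an external NLB hostname
kubectl get svc ingress-gw -n shared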

Canary Deployment Strategy using Argocd rollout and Service Mesh Interface (Traefik Mesh)

I'm working on the canary deployment strategy.
I use the Service Mesh Interface, after installing Traefik Mesh.
When starting the program for the first time with the command
kubectl apply -f applications.yaml
it should deploy the entire application, i.e. 4 replicas, but it deploys only 20% (1 replica) of the application,
and it goes into a progressing state with an error:
TrafficRoutingErro: the server could not find the requested resource (post trafficsplits.splits.smi-spec.io)
TrafficSplitNotCreated: Unable to create traffic Split 'demo-traefficsplit'
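The "could not find the requested resource" part of the error typically indicates that the SMI TrafficSplit CRD is not installed in the cluster, or is served under a different API version than the one Argo Rollouts posts to. A rough check:
# Is a TrafficSplit CRD installed, and under which group/version?
kubectl get crd | grep -i trafficsplit
kubectl api-resources | grep -i trafficsplit
# Which SMI API versions does the API server actually serve?
kubectl api-versions | grep smi-spec.io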
Here is my manifest:
argocd-rollout.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo
  labels:
    app: demo
spec:
  strategy:
    canary:
      steps:
        - setWeight: 20
        - pause:
            duration: "1m"
        - setWeight: 50
        - pause:
            duration: "2m"
      canaryService: demo-canary
      stableService: demo
      trafficRouting:
        smi:
          rootService: demo-smi
          trafficSplitName: demo-trafficsplit
  replicas: 4
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: demo
      version: blue
  template:
    metadata:
      labels:
        app: demo
        version: blue
    spec:
      containers:
        - name: demo
          image: argoproj/rollouts-demo:blue
          imagePullPolicy: Always
          ports:
            - name: web
              containerPort: 8080
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "128Mi"
              cpu: "140m"
---
apiVersion: split.smi-spec.io/v1alpha3
kind: TrafficSplit
metadata:
  name: demo-trafficsplit
spec:
  service: demo-smi # controller uses the stableService if Rollout does not specify the rootService field
  backends:
    - service: demo
      weight: 10
    - service: demo-canary
      weight: 90
---
apiVersion: v1
kind: Service
metadata:
  name: demo-smi
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: demo
    version: blue
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: demo
    version: blue
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: demo-canary
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: demo
    version: blue
  type: ClusterIP
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: rollout-ing
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`mycompagny.com`)
      services:
        - name: demo-smi
          port: 80
  tls:
    certResolver: myresolver
applications.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: net
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rollout
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@github.com:telemaqueHQ/DevOps.git
    targetRevision: master
    path: gitOps/test/argocd
  destination:
    server: https://kubernetes.default.svc
    namespace: net
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Traefik 2.0 redirect middleware with Google Kubernetes Engine

I'm trying to test and implement Traefik's HTTPS redirect feature in my Kubernetes cluster, per Traefik's documentation: https://docs.traefik.io/middlewares/overview/. Here's the definition of the IngressRoute and Middleware:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  namespace: default
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: Host(`your.domain.name`) && Host(`www.your.domain.name`)
      kind: Rule
      services:
        - name: traefik-dashboard
          port: 8080
      middlewares:
        - name: redirectscheme
  tls:
    secretName: cloud-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: redirectscheme
spec:
  redirectScheme:
    scheme: https
However, https://your.domain.name works and http://your.domain.name gives me a 404 page not found.
Does anyone know what I have misconfigured?
This is what worked for me:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: a3-ing
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`example.com`)
      kind: Rule
      services:
        - name: whoami
          port: 80
  tls:
    certResolver: default
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: a3-ing-red
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`example.com`)
      middlewares:
        - name: test-redirectscheme
      kind: Rule
      services:
        - name: whoami
          port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: a3-ing-www
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`www.example.com`)
      kind: Rule
      services:
        - name: whoami
          port: 80
  tls:
    certResolver: default
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: a3-ing-www-red
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`www.example.com`)
      kind: Rule
      middlewares:
        - name: test-redirectscheme
      services:
        - name: whoami
          port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: test-redirectscheme
  namespace: default
spec:
  redirectScheme:
    scheme: https
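To confirm the redirect IngressRoutes behave as intended, a quick check with curl (assuming the hosts above resolve to the Traefik entry points):
# Requests on the web entrypoint should get a 3xx response whose
# Location header points at https://
curl -sI http://example.com/ | head -n 5
curl -sI http://www.example.com/ | head -n 5
# Requests on the websecure entrypoint should be served directly
curl -skI https://example.com/ | head -n 5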

Failed to discover supported resources

I am trying to create a user with limited namespace access. I created a namespace named test and also created Group: programmers and User: frontend. I generated credentials for the user frontend with the help of the following: http://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
I created a Role. Here is my role.yml:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: test
  name: frontend-developer
rules:
  - apiGroups: ["","extensions","apps"]
    resources: ["deployments","replicasets","pods"]
    verbs: ["get","list","watch","create","patch"]
I created a RoleBinding. Here is my role-binding.yml:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: frontend-deploy
  namespace: test
subjects:
  - kind: User
    name: frontend
    namespace: test
roleRef:
  kind: Role
  name: frontend-developer
  apiGroup: rbac.authorization.k8s.io
Here is my deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodefrontend
  namespace: test
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: bookstore
    spec:
      containers:
        - name: nodeweb
          image: balipalligayathri/devops-develop
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
I am using the following commands to create the role and rolebinding:
$ kubectl create -f role.yml
$ kubectl create -f role-binding.yml
The frontend-developer Role and the frontend-deploy RoleBinding were created.
Again, I am using the command kubectl create -f node-deployment.yml to create the deployment. The deployment was created and deleted successfully. Here, I didn't mention any user while creating the deployment, so now I am trying to create the deployment as the user with the command below.
kubectl create -f node-deployment.yml --as=frontend --context=frontend-context
I am getting the following error:
Error from server (Forbidden):
<html><head><meta http-equiv='refresh' content='1;url=/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s'/><script>window.location.replace('/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s');</script></head><body style='background-color:white; color:white;'>
Authentication required
https://stackoverflow.com/questions/48164369/kubernetes-1-8-dashboard-configurations-fails-with-error-no-kind-role-is-regi
You are authenticated as: anonymous
Groups that you are in:
Permission you need to have (but didn't): hudson.model.Hudson.Read
which is implied by: hudson.security.Permission.GenericRead
which is implied by: hudson.model.Hudson.Administer </body></html>
My doubt is: is it necessary to mention the user in the deployment.yml file?
You need to create a ServiceAccount; take a look at this snippet:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myAccount
bind it to your role:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: myBinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: frontend-developer
subjects:
  - kind: ServiceAccount
    name: myAccount
and use it in your Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodefrontend
  namespace: test
spec:
  template:
    metadata:
      labels:
        ...
    spec:
      serviceAccountName: myAccount
Ref:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
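Independently of the ServiceAccount approach, the original Role and RoleBinding for the frontend user can be verified without applying any manifest, using kubectl impersonation. This assumes the credentials kubectl currently uses are allowed to impersonate users (for example a cluster-admin context):
# Should print "yes" if the Role and RoleBinding are effective in namespace test
kubectl auth can-i create deployments --namespace test --as frontend
kubectl auth can-i list pods --namespace test --as frontend
# Should print "no" outside the bound namespace
kubectl auth can-i create deployments --namespace default --as frontend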