Docker Containers as Jenkins Slaves - Selenium

I want to know whether the following scenarios are possible; please help me out:
Scenario 1:
I have my local system as the Jenkins master. Every time I need a slave to run my automation test script, a Docker container spins up as a Jenkins slave, my script is executed on it, and after the execution is completed the container is destroyed.
Is this possible? I want to keep my local system as the Jenkins master.
Scenario 2:
Can I spin up multiple containers as Jenkins slaves while keeping my local system as the Jenkins master?
Thanks

Scenario 1 is at least covered by the JENKINS/Kubernetes Plugin: see its README.
Based on the Scaling Docker with Kubernetes article, it automates the scaling of Jenkins agents running in Kubernetes.
But that requires a Kubernetes setup, which, in your case (if you have only one machine), means Minikube.
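As an illustration (a sketch, not taken from the plugin documentation verbatim), the plugin launches each agent as a pod built from a pod template you define; for a Selenium job, such a template could pair the JNLP agent container with a browser container. The image names and label below are assumptions:
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins-agent: selenium            # label your job's "Label Expression" would match (assumption)
spec:
  restartPolicy: Never
  containers:
  - name: jnlp
    image: jenkins/inbound-agent       # assumption: stock agent image that dials back to the master
  - name: selenium
    image: selenium/standalone-chrome  # assumption: browser container the test script talks to
    ports:
    - containerPort: 4444              # Selenium WebDriver port
The pod (and with it the container) is deleted once the build finishes, which matches the spin-up/destroy behaviour of Scenario 1, and several such pods can run in parallel, which covers Scenario 2.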

I have written a message on Scenario 1 (with Kubernetes) at this link:
Jenkins kubernetes plugin not working
Here is the post.
Instead of using certificates, I suggest you use Kubernetes credentials by creating a ServiceAccount:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
and deploying Jenkins using that ServiceAccount:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: jenkins
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      serviceAccountName: jenkins
      ....
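The master also needs a Service so that the agent pods can reach both the web UI and the JNLP agent port; a minimal sketch, assuming the default ports 8080 and 50000 (this is what the 'jenkins' service and the tunnel mentioned below refer to):
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  selector:
    app: jenkins
  ports:
  - name: http        # web UI, used as the Jenkins URL in the plugin configuration
    port: 8080
    targetPort: 8080
  - name: jnlp        # agent port, used for the Jenkins tunnel setting
    port: 50000
    targetPort: 50000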
Here are my screenshots for the Kubernetes plugin configuration (note the Jenkins tunnel for the JNLP port; 'jenkins' is the name of my Kubernetes service):
For credentials:
Then fill in the fields (the ID will be autogenerated, the description will be shown in the credentials list box), but be sure to have created the ServiceAccount in Kubernetes as described above:
My instructions are for a Jenkins master inside Kubernetes. If you want it outside the cluster (but the slaves inside), I think you have to use simple login/password credentials.
I hope it helps you.

Related

Unable to successfully setup TLS on a Multi-Tenant GKE+Istio with LetsEncrypt (via Cert Manager)

I'm trying to configure TLS (Let's Encrypt) on a multi-tenant GKE + Istio setup.
I mainly followed this guide -> Full Isolation in Multi-Tenant SAAS with Kubernetes & Istio for setting up the multi-tenancy in GKE + Istio, which I was able to pull off successfully. I'm able to deploy simple apps in their separate namespaces, which are accessible through their respective subdomains.
I then tried to move forward and set up TLS with Let's Encrypt. For this I mainly followed a different guide, which can be found here -> istio-gke. Unfortunately, following this guide didn't produce the result I wanted: when I was done with it, Let's Encrypt wasn't even issuing certificates for my deployment or domain.
So I tried to follow yet another guide -> istio-gateway-tls-setup. Here I managed to get Let's Encrypt to issue a certificate for my domain, but when I tested it with openssl and other online SSL checkers, they say that I'm still not communicating securely.
Below are the results when I describe the configuration of my certificate, issuer & gateway:
Certificate: kubectl -n istio-system describe certificate istio-gateway
Issuer: kubectl -n istio-system describe issuer letsencrypt-prod
Gateway: kubectl -n istio-system describe gateway istio-gateway
And here are the dry-run results for my helm install <tenant>:
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /home/cjcabero/projects/aqt-ott-msging-dev/gke-setup/helmchart
NAME: tenanta
LAST DEPLOYED: Wed Feb 17 21:15:08 2021
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
frontend:
  image:
    pullPolicy: IfNotPresent
    repository: paulbouwer/hello-kubernetes
    tag: "1.8"
  ports:
    containerPort: 8080
  service:
    name: http
    port: 80
    type: ClusterIP
HOOKS:
MANIFEST:
---
# Source: helmchart/templates/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenanta
  labels:
    istio-injection: enabled
---
# Source: helmchart/templates/frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: tenanta
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: frontend
---
# Source: helmchart/templates/frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: tenanta
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: Hello tenanta
---
# Source: helmchart/templates/virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tenanta-frontend-ingress
  namespace: istio-system
spec:
  hosts:
  - tenanta.cjcabero.dev
  gateways:
  - istio-gateway
  http:
  - route:
    - destination:
        host: frontend.tenanta.svc.cluster.local
        port:
          number: 80
I don't understand how, even though Let's Encrypt seems to be able to issue the certificate for my domain, it still isn't communicating securely.
Google Domains even shows that a certificate was issued for the domain in its Transparency Report.
I'm not sure if this helps, but I also tried to check the domain with an online SSL checker and here are the results -> https://check-your-website.server-daten.de/?q=cjcabero.dev.
By the way, I used Istio on GKE, which results in Istio v1.4.10 and Kubernetes v1.18.15-gke.1100.

How to Enable IAP with GKE

I created a basic React/Express app with IAP authentication and deployed it to Google App Engine, and everything works as expected. Now I'm moving from the App Engine deployment to Kubernetes; everything works except the user authentication with IAP on Kubernetes. How do I enable IAP user authentication with Kubernetes?
Do I have to create a Kubernetes Secret to get user authentication to work? https://cloud.google.com/iap/docs/enabling-kubernetes-howto
The authentication code in my server.js is based on https://cloud.google.com/nodejs/getting-started/authenticate-users#cloud-identity-aware-proxy
In order for Cloud IAP to work with Kubernetes, you will need a group of one or more GKE instances served by an HTTPS load balancer. The load balancer is created automatically when you create an Ingress object in a GKE cluster.
Also required for enabling Cloud IAP in GKE: a domain name registered to the address of your load balancer, and app code that verifies all requests have an identity.
Once these requirements are met, you can move forward with enabling Cloud IAP on Kubernetes Engine. This includes setting up Cloud IAP access and creating OAuth credentials.
You will need to create a Kubernetes Secret to configure the BackendConfig for Cloud IAP. The BackendConfig uses this Secret to wrap the OAuth client that you create when enabling Cloud IAP.
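For reference, that Secret simply holds the OAuth client ID and secret under the keys client_id and client_secret; a minimal sketch (the name oauth-client-secret matches the BackendConfig below, the values are placeholders):
apiVersion: v1
kind: Secret
metadata:
  name: oauth-client-secret
type: Opaque
stringData:
  client_id: <OAUTH_CLIENT_ID>            # from the OAuth client created for IAP
  client_secret: <OAUTH_CLIENT_SECRET>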
Then you need to add an Ingress and enable IAP on the backend service.
Create a BackendConfig object:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: app-bc
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: oauth-client-secret
attach it to the service:
apiVersion: v1
kind: Service
metadata:
  name: app-service
  annotations:
    cloud.google.com/backend-config: '{"default": "app-bc"}'
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
and then create the ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  tls:
  - secretName: ingress-tls-secret
  rules:
  - host: ""
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: app-service
            port:
              number: 80
You can find the full tutorial here hodo.dev/posts/post-26-gcp-iap/

Unable to access Kibana after successful authentication with IBM Cloud AppID

I have a k8s deployment of Kibana in IBM Cloud. It is exposed through a ClusterIP k8s Service and a k8s Ingress, and it is accessible to a single Cloud Directory user authenticated through IBM Cloud App ID.
Kubernetes correctly redirects to the App ID login screen. The issue is that the Kibana deployment is not accessible after successful App ID authentication: I get 301 Moved Permanently in a loop.
The same k8s deployment, exposed through a k8s NodePort, works fine.
The same setup as above works correctly for a simple hello-world app with authentication.
I followed this tutorial.
In App ID Authentication Settings, the redirect URL is:
https://our-domain/app/kibana/appid_callback
Here are the relevant portions of the k8s definitions:
---
kind: Service
apiVersion: v1
metadata:
  name: kibana-sec
  namespace: default
  labels:
    app: kibana-sec
spec:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 5601
  selector:
    app: kibana-sec
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.bluemix.net/redirect-to-https: "True"
    ingress.bluemix.net/appid-auth: "bindSecret=<our-bindSecret> namespace=default requestType=web serviceName=kibana-sec"
  ...
spec:
  rules:
  - host: <our-domain>
    http:
      paths:
      ...
      - backend:
          serviceName: kibana-sec
          servicePort: 8080
        path: /app/kibana/
  tls:
  - hosts:
    - <our-domain>
    secretName: <our-secretName>
status:
  loadBalancer:
    ingress:
    - ip: <IPs>
    - ip: <IPs>
There is no "ingress.bluemix.net/rewrite-path" annotation for our service.

Spring Boot microservice is not communicating with Redis

I have deployed a few microservices on a Kubernetes cluster. The problem is that these microservices are not communicating with the Redis server, regardless of whether it is deployed in the same Kubernetes cluster or hosted outside. At the same time, if I start a microservice outside the cluster, it is able to access both kinds of Redis: hosted remotely or deployed in the k8s cluster. I don't understand what I am doing wrong; here are my deployment and service YAML files.
Deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Service yaml:
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  ports:
  - nodePort: 30011
    port: 80
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
  type: NodePort
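For reference, my understanding is that inside the cluster this Service is reached via its name and service port (80), not the 6379 container port, so a Spring Boot client would normally be configured roughly like this (a sketch of an application.yml using the standard spring.redis.* properties, not my exact config):
spring:
  redis:
    host: redis-service   # the Service name, resolved through cluster DNS
    port: 80              # the Service port; Kubernetes forwards it to targetPort 6379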
I have followed multiple blogs and tried to get help from other sources, but nothing worked. I also forgot to mention that I am able to access the Redis deployed in the k8s cluster from remote machines using Redis Desktop Manager.
The k8s cluster is running on-prem on Ubuntu 16.04.
When the microservices run outside k8s, on a Windows machine, they can access Redis both on the k8s cluster and outside of k8s with the same code.
When the microservices try to communicate with Redis from inside the cluster, I get the following log:
2019-03-11 10:28:50,650 [main] INFO org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer - Tomcat started on port(s): 8082 (http)
2019-03-11 10:29:02,382 [http-nio-8082-exec-1] INFO org.springframework.web.servlet.DispatcherServlet - FrameworkServlet 'dispatcherServlet': initialization started
2019-03-11 10:29:02,396 [http-nio-8082-exec-1] INFO org.springframework.web.servlet.DispatcherServlet - FrameworkServlet 'dispatcherServlet': initialization completed in 14 ms
Print : JedisConnectionException - redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
Print : JedisConnectionException - redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
Print : JedisConnectionException - redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool

Deploy the same deployment several times in Kubernetes

I intend to deploy my online service that depends on Redis servers in Kubernetes. So far I have:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: "redis"
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
          protocol: TCP
I can also expose redis as a service with:
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  ports:
  - port: 6379
    protocol: TCP
  selector:
    app: redis
With this, I can run a single pod with a Redis server and expose it.
However, my application needs to contact several Redis servers (this can be configured, but ideally should not change live). It does care which Redis server it talks to, so I can't just use replicas and expose them behind a single service, as the service abstraction would not let me know which instance I am talking to. I understand that my application's need to know this hinders scalability, and I am happy to lose some flexibility as a result.
I thought of deploying the same deployment several times and declaring a service for each, but I could not find anything to do that in the Kubernetes documentation. I could of course copy-paste my deployment and service YAML files, adding a suffix to each, but this seems silly and way too manual (and frankly too complex).
Is there anything in Kubernetes to help me achieve my goal?
You should look into pet sets. A pet set gives each instance a unique but predictable name, like redis-0, redis-1, redis-2.
They are also treated as pets, as opposed to deployment pods, which are treated as cattle.
Be advised that pet sets are harder to upgrade, delete and handle in general, but that's the price of the reliability and stable identity you get.
Your deployment as a pet set:
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  replicas: 3
  template:
    metadata:
      labels:
        app: redis
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
          protocol: TCP
Also consider using volumes to make it easier to access the data in the pods.
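The serviceName: "redis" above refers to a governing Service, normally headless, which gives each pet a stable DNS name (redis-0.redis, redis-1.redis, ...) that your application can target individually; a minimal sketch:
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  clusterIP: None      # headless: no load balancing, just stable per-pod DNS records
  ports:
  - port: 6379
    protocol: TCP
  selector:
    app: redis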