OpenTest: How to integrate server, actor and template in AWS CodeBuild for Linux?

We have to integrate our OpenTest automation with AWS CodeBuild. How can we run the server, an actor, and a template-based test session from a single terminal on Linux?

You can trigger a new test session by using the OpenTest CLI:
opentest session create --template <TEMPLATE_NAME> --wait --server <SERVER_IP>:<PORT>
For example:
opentest session create --template "My session template" --wait --server 1.2.3.4:3000
Of course, you have to make sure that:
OpenTest is installed on the machine where the AWS CodeBuild agent runs, and the opentest command is on the system PATH - basically, make sure the opentest command is available to your pipeline.
The AWS CodeBuild agent machine has network access to the IP and port that the OpenTest server is running on.
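For example, a minimal buildspec.yml sketch might look like this (the npm-based install, the template name, and the server address are assumptions; replace them with your own values):

version: 0.2
phases:
  install:
    commands:
      # Assumes Node.js is available in the build image; installs the OpenTest CLI
      - npm install -g opentest
  build:
    commands:
      # Placeholder template name and server address - use your own
      - opentest session create --template "My session template" --wait --server 1.2.3.4:3000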

Related

Jenkins plugin to generate kube config for kubectl after changes to kubectl authentication for GKE

For kubectl to access GKE, the gke-gcloud-auth-plugin now also needs to be installed.
I am using Jenkins to deploy changes to GKE via the kubectl plugin, but after this change I am no longer able to use the same plugin.
Can anyone suggest a Jenkins plugin that can help access GKE after this change is rolled out in kubectl?
https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
You could use the google/cloud-sdk image, which has the gke-gcloud-auth-plugin already pre-installed. But now, before running gcloud container clusters get-credentials, you should run:
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
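Putting it together, a minimal sketch of the full sequence (the cluster name, zone, and project are placeholders):

export USE_GKE_GCLOUD_AUTH_PLUGIN=True
# Placeholder cluster, zone, and project - substitute your own
gcloud container clusters get-credentials my-cluster --zone us-central1-a --project my-project
# kubectl now authenticates through the plugin
kubectl get pods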

Can't find k8s context after installing GitLab Agent

Referring to https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html:
I installed the GitLab Agent using Helm, and the Kubernetes cluster is connected, but
there is only one context when I execute
kubectl config get-contexts
* 270285107419715523-c34f80xxxxx647c08c49f1e550887388 kubernetes 27xxxx419715523
Besides that, the GitLab CI/CD pipeline failed.

Deploy static code to Apache httpd server automatically

I have a static website that I would like to deploy to my Apache httpd server using Jenkins or any other method.
My code base is available on GitHub.
The files on the server have to be updated at the path /var/www/html.
You could set Jenkins up to poll your repository on GitHub. You will need to set up the integration between GitHub and Jenkins so Jenkins can poll your repo for changes. Then you can use a copy tool of your choice, such as rsync or scp, and run it in the Jenkins script to copy the files to your server (see the sketch after the link below).
Jenkins Polling and SCM Management - https://www.softwaretestinghelp.com/jenkins-job-tutorial/
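As a minimal sketch, the copy step in a Jenkins shell build step could look like this (the deploy user, server hostname, and workspace layout are assumptions):

# Sync the checked-out site from the Jenkins workspace to the Apache web root
# (placeholder user and host - replace with your own)
rsync -avz --delete ./ deploy@webserver:/var/www/html/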

How can I ssh into a container running inside an OpenShift/Kubernetes cluster?

I want to be able to ssh into a container within an OpenShift pod.
I know that I can do so using oc rsh, but that assumes the OpenShift CLI is installed on the node I want to connect from.
What I actually want to achieve is to ssh into a container from a node that does not have the OpenShift CLI installed. The node is on the same network as the OpenShift cluster and has access to web applications hosted in a container (just for the sake of example). But instead of web access, I would like to have ssh access.
Is there any way that this can be achieved?
Unlike a server, which runs an entire operating system on real or virtualized hardware, a container is nothing more than a single Linux process encapsulated by a few kernel features: cgroups, namespaces, and SELinux. A "fancy" process, if you will.
Opening a shell session into a container is not quite the same as opening an ssh connection to a server. Opening a shell into a container requires starting a shell process and assigning it to the same cgroups and namespaces on the same host as the container process and then presenting that session to you, which is not something ssh is designed for.
Opening a shell session inside a running container with the oc exec, kubectl exec, podman exec, or docker exec CLI commands is the method that should be used instead.
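For example, hedged sketches with placeholder pod and container names:

# OpenShift: open an interactive shell in a pod (placeholder pod name)
oc exec -it my-pod -- /bin/bash
# Kubernetes equivalent, targeting a specific container within the pod
kubectl exec -it my-pod -c my-container -- /bin/bash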

How do I connect to a docker container running Apache Drill remotely

On Machine A, I run
$ docker run -i --name drill-1.14.0 -p 8047:8047 \
    --detach -t drill/apache-drill:1.14.0 /bin/bash
<displays container ID>
$ docker exec -it drill-1.14.0 bash
<connects to container>
$ /opt/drill/bin/drill-localhost
My question is: how do I, from Machine B, run
docker exec -it drill-1.14.0 bash
on Machine A? I've looked through the help pages, but nothing is clicking.
Both machines are Windows (10 x64) machines.
You need to ssh or otherwise securely connect from machine B to machine A, and then run the relevant Docker command there. There isn't a safe shortcut around this.
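As a rough sketch (the user and hostname are placeholders), from Machine B you could run:

# -t allocates a terminal so the interactive docker exec works over ssh
ssh -t user@machine-a "docker exec -it drill-1.14.0 bash"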
Remember that being able to run any Docker command at all implies root-level access over the system (you can docker run -u root -v /:/host ... and see or change any host-system files you want). Usually there's some control over who exactly can run Docker commands because of this. It's possible to open up a networked Docker socket, but extremely dangerous: now anyone who can reach that socket over the network can, say, change the host's password and sudoers files to allow a passwordless root-equivalent ssh login. (Google News brought me an article a week or two ago about attackers looking for open Docker network sockets and using them to turn machines into cryptocurrency miners, for instance.)
If you're building a service and you expect users to interact with it remotely, then you probably need to make the relevant interfaces available as network requests rather than local shell commands. For instance, it's common for HTTP-based services to have a /admin set of URL paths that require separate password authentication or otherwise different privileges.
If you're trying to administer a service via its local config files, often the best path is to store the config files on the host system, use docker run -v to inject them into the container, and when you need to change them, docker stop; docker rm; docker run the container to get a new copy of it with a new config file.
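A hedged sketch of that pattern (the image name and config paths are placeholders):

# Inject a host-side config file into the container at startup
docker run -d --name myapp -v /srv/myapp/app.conf:/etc/myapp/app.conf myapp-image
# After editing the host copy, recreate the container to pick up the change
docker stop myapp && docker rm myapp
docker run -d --name myapp -v /srv/myapp/app.conf:/etc/myapp/app.conf myapp-image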
If you're packaging some application, but the primary way to interact with it is via CLI tools and local files, consider whether you actually want to use a tool that isolates the application's filesystem from the host's and requires root-level access to interact with it at all. The tooling for installing semi-isolated tools in your choice of scripting language is pretty mature, and for compiled languages quite well-established; there's nothing wrong with installing software on your host system.