Error while executing e2e test in Kubernetes

I am trying to run the e2e test cases of Kubernetes but am facing this issue:
../../cluster/../cluster/gce/util.sh: line 127: gcloud: command not found
I am using this command:
go run hack/e2e.go -- -v --test
What should be the fix for this?

Try prepending
KUBERNETES_PROVIDER=local KUBE_MASTER=local go run hack/e2e.go -- -v --test
The e2e tests are written to build up and tear down a cluster for you, and the provider is used to do just that. There are providers for, e.g., Google Cloud and AWS. That is also why you get the gcloud error: the test harness tries to build a new cluster on Google Cloud and cannot find the gcloud CLI binary.
With the local provider this shouldn't happen.
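For clarity, the same fix written out with exported variables (just a restatement of the command above, nothing new):
# equivalent to prepending the variables to the command
export KUBERNETES_PROVIDER=local
export KUBE_MASTER=local
go run hack/e2e.go -- -v --test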

Related

Azure DevOps self-hosted agent, newman command not recognized

Trying to run my Postman collection in Azure DevOps inside a self-hosted agent. When I run the command directly on the agent, "newman run postman_collection.json -e postman_environment.json -r cli,htmlextra", it runs fine. But when I run the same through a command line script task in the release pipeline, it throws the error "newman is not recognized..". I also tried to have an npm task for the newman installation, i.e. "npm install -g newman", and it also throws the error "##[error]Unable to locate executable file: 'newman'. Please verify either the file path exists or the file can be found within a d...."
According to the error message "##[error]Unable to locate executable file: 'newman'" when using npm install -g newman, you could try adding C:\Users\[BUILDSERVER-USERNAME]\AppData\Roaming\npm to the PATH variable for the [BUILDSERVER-USERNAME] user.
You could refer to the document "How to fix the Newman task for Team Foundation Server silently failing" for some more details.
Besides, when we use the command line to install newman, it takes a few minutes, so we need to wait a few minutes before running the command:
"newman run postman_collection.json -e postman_environment.json -r cli,htmlextra"
You could add a PowerShell task to sleep for a few minutes:
echo "Sleeping for 10 mins..."
Start-Sleep -s 600
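Alternatively, a rough sketch of a single PowerShell task that installs and then runs newman in the same session (the %APPDATA%\npm path is an assumption for a default per-user global install; adjust it to wherever npm puts global packages on your agent):
npm install -g newman newman-reporter-htmlextra
$env:Path += ";$env:APPDATA\npm"   # make the freshly installed newman visible to this session
newman run postman_collection.json -e postman_environment.json -r cli,htmlextra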

AWS Decrypted Variables Error Message: parameter does not exist: JWT_SECRET

I am new to AWS and was trying to create a pipeline, but it throws this error once it builds:
[Container] 2020/05/23 04:32:56 Phase context status code: Decrypted Variables Error Message: parameter does not exist: JWT_SECRET
Even though the token was stored by running this command:
aws ssm put-parameter --name JWT_SECRET --value "myjwtsecret" --type SecureString
I tried to fix that by adding this line to the buildspec.yml post-build commands, but it still does not fix the problem:
- kubectl set env deployment/simple-jwt-api JWT_SECRET=$JWT_SECRET
My buildspec.yml contains this section, added to pass my JWT secret to the app:
env:
  parameter-store:
    JWT_SECRET: JWT_SECRET
Check my GitHub repo for more details about the code.
Also, when I run kubectl get services simple-jwt-api -o wide under cmd to test the API endpoints, I get this error:
Error from server (NotFound): services "simple-jwt-api" not found
Well, that is obvious since the pipeline failed to build. How can I fix it?
In my case I got this error because I had created my stack in a different region than the cluster, so whenever it searched for the parameter it did not find it. Be careful to point to the same region in every creation action :).
The best solution I found was to add a region flag when storing the parameter:
aws ssm put-parameter --name JWT_SECRET --value "myjwtsecret" --type SecureString --region <your-cluster-region>
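As a quick sanity check (my own suggestion, not part of the original answer), you can confirm the parameter is actually visible in the region the build runs in before re-running the pipeline:
# should print the parameter; an error here means CodeBuild will not find it either
aws ssm get-parameter --name JWT_SECRET --with-decryption --region <your-cluster-region>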
I also encountered this same issue.
Changing the kubectl version in the buildspec.yml file worked for me:
- curl -LO https://dl.k8s.io/release/v<YOUR_KUBERNETES_VERSION>/bin/linux/amd64/kubectl
# Download the kubectl checksum file
- curl -LO "https://dl.k8s.io/v<YOUR_KUBERNETES_VERSION>/bin/linux/amd64/kubectl.sha256"
Note that <YOUR_KUBERNETES_VERSION> must be the same as the version shown on your cluster dashboard.
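For context, a rough sketch of how those downloads are typically finished off in the install phase (these extra steps follow the standard kubectl install instructions and are assumptions, not part of the original answer):
- echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check   # verify the downloaded binary against the checksum file
- chmod +x kubectl
- mv kubectl /usr/local/bin/kubectl
- kubectl version --client   # confirm the client version matches your cluster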

Selenium side runner + chromedriver tests with docker not running

I am trying to get selenium side runner to run some tests using docker, to include in our CI.
I am able to run the tests locally in my machine by running:
selenium-side-runner C:\path-to-tests\tests-selenium.side
This is a Windows host.
I am trying to do the same using Docker locally, so afterwards I can migrate this to our TeamCity.
First I am running the selenium server container:
docker run -d -p 4444:4444 --name chromedriver selenium/standalone-chrome:3.4.0
Afterwards I run the selenium side runner container:
docker run -v C:\path-to-tests:/sides --link chromedriver:chromedriver nixel2007/docker-selenium-side-runner
I have to link the containers, otherwise I get an error saying that the container can't connect to chromedriver:4444.
I also have to mount the volume where my tests are.
When I do this and run, I get the following error:
Test suite failed to run
WebDriverError: Unable to parse new session response
What am I missing here?
UPDATE:
I also tried different versions of the selenium/standalone-chrome container, selenium/standalone-chrome:3.4.0, selenium/standalone-chrome:3.141.59-xenon and selenium/standalone-chrome:latest
All fail with different errors.
SECOND UPDATE:
I have been able to get the tests to run, both locally and in TeamCity. One of the issues I am facing right now is that docker-compose seems to hang; I am not sure if this is container related or docker-compose related.
When I run the tests, the selenium side runner container exits with code 1 and I do not get back to the host console prompt; it stays forever waiting for something to happen.
The error is this:
selenium_selenium-side-runner_1 exited with code 1
I have gotten the docker-compose file from here:
https://github.com/nixel2007/docker-selenium-side-runner/blob/master/docker-compose.yml
Any clues on what I might be missing?
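One guess about the hang (not verified against the linked compose file): docker-compose up stays attached to every service, so it keeps waiting on the long-running standalone-chrome container even after the runner exits. Flags along these lines make compose tear everything down and propagate the runner's exit code, which is usually what a CI step wants:
# service name taken from the exit message above
docker-compose up --abort-on-container-exit --exit-code-from selenium-side-runner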

Using services: mysql for codeception tests in gitlab-ci fails with "Connection refused"

I have a CakePHP application with the codeception plugin for testing.
Locally I run it in a ddev Docker environment and everything works fine.
Trying to run automated tests with gitlab-ci gives me the following error:
Running with gitlab-runner 11.1.0 (081978aa)
on shared runner 601c0f11
Using Docker executor with image kevinliteon/cakephp:php7 ...
Starting service mysql:latest ...
Pulling docker image mysql:latest ...
Using docker image sha256:6a834f03bd02bb88cdbe0e289b9cd6056f1d42fa94792c524b4fddc474dab628 for mysql:latest ...
Waiting for services to be up and running...
*** WARNING: Service runner-601c0f11-project-94-concurrent-0-mysql-0 probably didn't start properly.
Health check error:
service "runner-601c0f11-project-94-concurrent-0-mysql-0-wait-for-service" timeout
Health check container logs:
Service container logs:
2018-10-04T12:12:18.904025613Z Initializing database
2018-10-04T12:12:18.925096235Z 2018-10-04T12:12:18.919745Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release.
2018-10-04T12:12:18.925195518Z 2018-10-04T12:12:18.919970Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.12) initializing of server in progress as process 30
2018-10-04T12:12:50.330736417Z 2018-10-04T12:12:50.330487Z 5 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
*********
Pulling docker image kevinliteon/cakephp:php7 ...
Using docker image sha256:bd4a83b02647ad93a356b343d2ce5ae3a9a1177aea2cd76c61b009abc7df8990 for kevinliteon/cakephp:php7 ...
Running on runner-601c0f11-project-94-concurrent-0 via d7f4a5e71b47...
Fetching changes...
Removing vendor/
HEAD is now at 92cb022 test
Checking out 92cb0223 as deployment...
Skipping Git submodules setup
Checking cache for default...
Successfully extracted cache
$ vendor/bin/codecept run Unit
Codeception PHP Testing Framework v2.3.9
Powered by PHPUnit 6.5.13 by Sebastian Bergmann and contributors.
In Db.php line 308:
Db: SQLSTATE[HY000] [2002] Connection refused while creating PDO connection
My gitlab-ci.yml (partial):
services:
  - mysql:latest
variables:
  MYSQL_ROOT_PASSWORD: mysql123456789
  MYSQL_DATABASE: test_db
  MYSQL_USER: db
  MYSQL_PASSWORD: db
build:
  ...
codecept:Unit:
  stage: test
  script:
    - vendor/bin/codecept run Unit
In my codeception.yml I configured the Db module:
modules:
  config:
    Db:
      dsn: 'mysql:host=mysql;dbname=test_db'
      user: 'db'
      password: 'db'
      cleanup: true # reload dump between tests
      populate: true # load dump before all tests
      reconnect: true
I also tried using the root user, without success.
The problem is that I cannot connect to the DB for whatever reason... Maybe the warnings while initializing the service container have something to do with it, but I could not figure out how to fix them or whether they are the problem.
I really tried a lot of things without any success! Basically my code follows the documentation of gitlab-ci and codeception, so it should work.
Has anybody implemented this scenario successfully, or does anyone know what I'm doing wrong?
Thanks for any help!
I want to answer how I solved it:
The first thing was that I had to add the env variable "db_dsn", like this:
export db_dsn="mysql://user:passwd@host/db"
Then I still got the health-check error. The only way I found to set this up successfully was to use another Docker image for the db service. I chose "mariadb:latest", and then it worked for me.
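Putting both changes together, a minimal sketch of the adjusted test job (the mariadb hostname and the db/db credentials are carried over from the variables block above and may need adjusting):
codecept:Unit:
  stage: test
  services:
    - mariadb:latest
  script:
    - export db_dsn="mysql://db:db@mariadb/test_db"
    - vendor/bin/codecept run Unit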

Deploying Symfony 4 Application to AWS Elasticbeanstalk

I have a working Symfony 4.0.1 application running on PHP 7.1.14 (locally) that I would like to deploy to AWS Elastic Beanstalk using the EB CLI.
I have a dist package of the application on my master git branch configured for production (vendor folder removed, etc.) that I am able to successfully deploy to Heroku. Now I need to deploy to AWS EB.
The AWS EB environment has already been set up (although I don't have access to the console). Some environment details are as follows:
Platform: arn:aws:elasticbeanstalk:us-east-2::platform/Tomcat 8 with Java 8 running on 64bit Amazon Linux/2.7.7
Tier: WebServer-Standard-1.0
At first, I was able to successfully deploy the application, but accessing the URL gave a 404 error for every page.
I did some googling and found a few articles describing the use of .config files. I added one named 03_main.config with the following contents:
commands:
  300-composer-update:
    command: "export COMPOSER_HOME=/root && composer.phar self-update -n"
container_commands:
  300-run-composer:
    command: "composer.phar install --no-dev --optimize-autoloader --prefer-dist --no-interaction"
  600-update-cache:
    command: "source .ebextensions/bin/update-cache.sh"
  700-remove-dev-app:
    command: "rm web/app_dev.php"
Deploying with this .config file gives the following deployment failure error:
ERROR: [Instance: i-0c5f61f41d55a18bc] Command failed on instance. Return code: 127 Output: /bin/sh: composer.phar: command not found. command 300-composer-update in .ebextensions/03-main.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
I understand the purpose of .config files but do not understand what additional configuration is needed to get this Symfony app running.
I guess you should use the full path to composer, like below:
100-update-composer:
  command: export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update -n
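Applied to the original 03_main.config, that would look roughly like this (the /usr/bin/composer.phar path is an assumption; check where composer actually lives on the instance first, e.g. with which composer.phar):
commands:
  100-update-composer:
    command: "export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update -n"
container_commands:
  300-run-composer:
    command: "/usr/bin/composer.phar install --no-dev --optimize-autoloader --prefer-dist --no-interaction"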