Can't find path to the project on the virtual machine running workflows

Job execution on the VM:
When the job begins, GitHub automatically provisions a new VM for that job. All steps in the job execute on the VM, allowing the steps in that job to share information using the runner's filesystem. You can run workflows directly on the VM or in a Docker container. When the job has finished, the VM is automatically decommissioned.
I found this at https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners.
I need to know the exact file path to my project on the GitHub virtual machine.
For example:
name: Node Continuous Integration
on:
  pull_request:
    branches: [ master ]
jobs:
  test_pull_request:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: 12
      - run: npm ci
      - run: npm test
      - run: npm run build
I know that my project will be copied to the Ubuntu virtual machine, but where exactly will it be located?

Since you're using the checkout action, your repository files will be located under the $GITHUB_WORKSPACE directory.
$GITHUB_WORKSPACE is a default environment variable that holds the default working directory on the runner for steps, and the default location of your repository when using the checkout action. For example, /home/runner/work/my-repo-name/my-repo-name.
For more details, see:
actions/checkout docs
Default environment variables docs
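If you want to confirm the path yourself, a minimal job sketch (the job name and the echo/ls steps are illustrative additions, not part of the original workflow):

jobs:
  show_workspace:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # print and list the checkout location on the runner
      - run: echo "Checked out at $GITHUB_WORKSPACE"
      - run: ls "$GITHUB_WORKSPACE"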

Related

Pipeline passed without any changes - GitLab CI/CD

I'm a newcomer to GitLab CI/CD.
This is the .gitlab-ci.yml in my project, and I want to test how it works. I have registered a GitLab runner, and that is fine, but my problem is: when I add a file to my project, the pipelines run and pass successfully, yet nothing changes in my project on the server.
What is the problem? It is not a dockerized project.
image: php:7.2

stages:
  - preparation
  - building

cache:
  key: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"

composer:
  stage: preparation
  script:
    - mkdir test5
  when: manual

pull:
  stage: building
  script:
    - mkdir test6
CI/CD pipelines like GitLab CI/CD are nothing more than virtual environments, usually Docker containers, that operate on your code just as you do on your own host system.
Your mkdir operations definitely have an effect, but the changes remain inside the virtual environment because they are never reflected back to your remote repository or your server. For this to work, you have to set up Git inside the CI/CD runner and commit and push to your repository, (again) just as you would from your own host system, or deploy the files to the server from the runner. To execute custom commands, GitLab CI/CD has the script parameter; reading its documentation should get you up and running.
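For illustration, a deployment job along these lines could copy the runner's workspace to your server. This is only a sketch: the SERVER_USER and SERVER_HOST variables and the target path are placeholders you would define yourself (e.g. as CI/CD variables), and it assumes the runner has SSH access to the server; none of this is part of the original pipeline:

deploy:
  stage: building
  script:
    # copy the checked-out project to the server via rsync over SSH
    - rsync -avz --delete ./ "$SERVER_USER@$SERVER_HOST:/var/www/my-project/"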

Continuous Deployment not working

I need to continuously build a create-react-app application and deploy it to an Amazon S3 bucket.
I have written the following CircleCI config.yml:
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:7.10
    steps:
      - checkout
      - run: npm install
      - run: npm run build
deployment:
  prod:
    branch: circle-config-test
    commands:
      - aws s3 sync build/ s3://http://www.typing-coacher.net.s3-website.eu-central-1.amazonaws.com/ --delete
What I think should happen:
I have a Docker container; I install the application, build it, and the files end up ready in the build folder.
I then run the command listed in the CircleCI docs, and the build files are copied from the Docker machine to the S3 bucket.
To deploy a project to S3, you can use the following command in the deployment section of circle.yml:
aws s3 sync <path-to-files> s3://<bucket-URL> --delete
What actually happens:
The application is installed and the build files are created, but nothing happens with the deployment; it doesn't even appear in the build console.
What am I missing?
disclaimer: CircleCI Developer Advocate
Everything from the deployment: line and down shouldn't be there. That's syntax for CircleCI 1.0 while the rest of your config file is CircleCI 2.0.
You can either:
Create a new step and check for the branch name with Bash. If it's circle-config-test, then run the deployment commands. You'll also need to install the AWS CLI in that build.
Using CircleCI Workflows, create a deployment job with a branch filter for circle-config-test. You can use any image that contains the AWS CLI, or install it yourself. The CI Builds: AWS Docker image contains this for you.
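As a sketch of the second option in CircleCI 2.0 syntax (the deploy image tag and the bucket name are placeholders, and a workspace is used to pass the build folder between jobs; adapt as needed):

version: 2
jobs:
  build:
    docker:
      - image: circleci/node:7.10
    steps:
      - checkout
      - run: npm install
      - run: npm run build
      # keep the build output for the deploy job
      - persist_to_workspace:
          root: .
          paths:
            - build
  deploy:
    docker:
      - image: cibuilds/aws:latest  # any image with the AWS CLI works
    steps:
      - attach_workspace:
          at: .
      - run: aws s3 sync build/ s3://<your-bucket> --delete
workflows:
  version: 2
  build_and_deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: circle-config-test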

Git push executes unwanted GitLab runner

I'm new to GitLab CI. I constructed a very simple YAML just for test purposes. I configured a runner with the shell executor on my AWS machine and registered it properly. In Settings/Pipelines I can see the activated runner. When I push something to my repository, the following YAML should be executed:
before_script:
  - npm install

cache:
  paths:
    - node_modules/

publish:
  stage: deploy
  script:
    - node app.js
Instead, a completely different runner is continuously started (whatever I change, even when I turn off the runner on my machine). It is the runner with ID #40786. In the logs I can read:
Running with gitlab-ci-multi-runner 9.5.0 (413da38)
on docker-auto-scale (e11ae361)
Using Docker executor with image ruby:2.1 ...
I don't even have a Docker executor; I chose the shell one. What is going on? Please help.
When you registered the new runner, did you give it a tag?
If so, and the tag is e.g. my_tag, modify your YAML file and append:
publish:
  stage: deploy
  script:
    - node app.js
  tags:
    - my_tag
Otherwise the build will be picked up by a shared runner (such as the docker-auto-scale one you are seeing).
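For completeness, a tag can also be assigned when registering the runner. A minimal sketch, assuming non-interactive registration (the URL and token are placeholders taken from your project's runner settings):

gitlab-runner register \
  --non-interactive \
  --url https://gitlab.com/ \
  --registration-token <your-token> \
  --executor shell \
  --tag-list my_tag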

How to run a build on the local machine with drone.io

Does the build have to run on the drone.io server? Can I run the build locally? Since developers need to pass the build before pushing code to GitHub, I am looking for a way to run the build on the developer's local machine. Below is my .drone.yml file:
pipeline:
  build:
    image: node:latest
    commands:
      - npm install
      - npm test
      - npm run eslint
  integration:
    image: mongo-test
    commands:
      - mvn test
It includes two Docker containers. How do I run the build against this file in Drone? I looked at the Drone CLI, but it doesn't work the way I expected.
@BradRydzewski's comment is the right answer.
To run builds locally you use drone exec. You can check the docs.
Expanding on his answer: you must execute the command in the root of your local repo, exactly where your .drone.yml file is. If your build relies on secrets, you need to feed those secrets through the command line using the --secret or --secrets-file option.
When running a local build, there is no cloning step. Drone will use your local Git workspace and mount it in the step containers. So if you check out some other commit/branch during the execution of the local build, you will mess things up, because Drone will see those changes. Don't update your local repo while the build is running.
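As a usage sketch (the secrets file name is a placeholder, not from the original question):

# from the root of the repo, next to .drone.yml
drone exec

# if the build relies on secrets
drone exec --secrets-file secrets.txt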

How to run integration tests inside a Docker container in a Drone pipeline

I have a Docker image built for MongoDB testing; it can be found at zhaoyi0113/mongo-uat. When a container starts from this image, it creates a few MongoDB instances, which take a few minutes to start up. Now I want to run my integration test cases inside this container via Drone CI. Below is my .drone.yml file:
pipeline:
  build:
    image: node:latest
    commands:
      - npm install
      - npm test
      - npm run eslint
  integration:
    image: zhaoyi0113/mongo-uat
    commands:
      - npm install
      - npm run integration
There are two steps in this pipeline: the first runs the unit tests of a Node.js project; the second, integration, runs the integration test cases in the MongoDB Docker image.
When I run drone exec it fails with an error: failed to connect to mongo instance. I think that is because the MongoDB instances need a few minutes to start up, so the commands npm install and npm run integration should run only after the MongoDB instances have launched. How can I delay the build commands?
EDIT 1
The image zhaoyi0113/mongo-uat provides a MongoDB environment and creates a few MongoDB instances. I can run docker run -d zhaoyi0113/mongo-uat to launch the container, and after that I can attach to it to see the MongoDB instances. I am not sure how Drone launches the Docker container.
The recommended approach to integration testing is to place your service containers in the services section of the YAML [1][2].
Therefore, in order to start a Mongo service container, I would create the YAML file below. The Mongo service will start on the default port at 127.0.0.1 and be accessible from your pipeline containers.
pipeline:
  test:
    image: node
    commands:
      - npm install
      - npm run test
  integration:
    image: node
    commands:
      - npm run integration

services:
  mongo:
    image: mongo:3.0
This is the recommended approach for testing services like MySQL, Postgres, Mongo and more.
[1] http://readme.drone.io/usage/getting-started/#services
[2] http://readme.drone.io/usage/services-guide/
As a short addendum to Brad's answer: while the mongo service will run on 127.0.0.1 on the Drone host machine, it will not be possible to reach the service from this IP within the Node app. To access the service, you reference its service name (here: mongo).
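To illustrate the addendum, a hedged sketch: the MONGO_URL environment variable is a placeholder your test code would have to read; only the mongodb://mongo:27017 hostname reflects the service-name routing described above.

pipeline:
  integration:
    image: node
    environment:
      # the service is reachable under its service name "mongo"
      - MONGO_URL=mongodb://mongo:27017
    commands:
      - npm run integration

services:
  mongo:
    image: mongo:3.0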