I installed Drone version 0.4 on AWS and integrated it with my private Bitbucket repositories. Everything is working as it should. Here is my .drone.yml file:
build:
  image: phpunit/phpunit
  cache:
    - vendor
  commands:
    - echo Building Started
    - composer install
    - phpunit
My commits are successfully built with unit tests, but my badges always look like "build | none". Do I have to add anything else for that?
Thanks for the help.
I need to continuously build a create-react-app application and deploy it to an Amazon S3 bucket.
I have written the following CircleCI config.yml:
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:7.10
    steps:
      - checkout
      - run: npm install
      - run: npm run build
deployment:
  prod:
    branch: circle-config-test
    commands:
      - aws s3 sync build/ s3://http://www.typing-coacher.net.s3-website.eu-central-1.amazonaws.com/ --delete
What I think should happen:
I have a Docker container; I install the application, build it, and the files end up ready in the build folder.
I run the command listed in the CircleCI docs, and the build files move from the Docker machine to the S3 bucket.
To deploy a project to S3, you can use the following command in the deployment section of circle.yml:
aws s3 sync <path-to-files> s3://<bucket-URL> --delete
What actually happens:
The application is installed and the build files are created, but nothing happens with the deployment. It doesn't even appear in the build console.
What am I missing?
disclaimer: CircleCI Developer Advocate
Everything from the deployment: line down shouldn't be there. That's syntax for CircleCI 1.0, while the rest of your config file is CircleCI 2.0.
You can either:
Create a new step and check the branch name with Bash. If it's circle-config-test, then run the deployment commands. You'll also need to install the AWS CLI in that build.
Use CircleCI Workflows to create a deployment job with a branch filter for circle-config-test, as sketched below. You can use any image that contains the AWS CLI, or install it yourself; the CI Builds: AWS Docker image contains this for you.
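For reference, here is a rough sketch of the Workflows approach. The bucket name and workspace paths are placeholders, and the deploy image is only one option; adapt them to your project:

version: 2
jobs:
  build:
    docker:
      - image: circleci/node:7.10
    steps:
      - checkout
      - run: npm install
      - run: npm run build
      - persist_to_workspace:   # hand the build output to the deploy job
          root: .
          paths:
            - build
  deploy:
    docker:
      - image: cibuilds/aws   # the "CI Builds: AWS" image mentioned above; any image with the AWS CLI works
    steps:
      - attach_workspace:
          at: .
      - run: aws s3 sync build/ s3://<your-bucket-name> --delete
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: circle-config-test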
I'm new to GitLab CI. I constructed a very simple YAML file just for test purposes. I configured a runner with the shell executor on my AWS machine and registered it properly. In Settings/Pipelines I can see the activated runner. When I push something to my repository, the following YAML should be executed:
before_script:
  - npm install

cache:
  paths:
    - node_modules/

publish:
  stage: deploy
  script:
    - node app.js
Instead, a completely different runner is continuously started (whatever I change, even when I turn off the runner on my machine). It is the runner with ID #40786. In the logs I can read:
Running with gitlab-ci-multi-runner 9.5.0 (413da38)
on docker-auto-scale (e11ae361)
Using Docker executor with image ruby:2.1 ...
I don't even have a Docker executor; I chose the shell one. What is going on? Please help.
When you registered the new runner, did you give it a tag?
If so, and the tag is e.g. my_tag, modify your YAML file and append:
publish:
  stage: deploy
  script:
    - node app.js
  tags:
    - my_tag
Otherwise the build will be picked up by a shared runner.
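If the runner was registered without a tag, you can re-register it non-interactively with one. A sketch, with placeholder URL and token (exact flags may vary slightly by runner version):

gitlab-ci-multi-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token <project-registration-token> \
  --executor shell \
  --description "my shell runner" \
  --tag-list my_tag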
I am working on a project to build, test and deploy an application to the cloud using a .gitlab-ci.yml file.
1) Build the backend and frontend using pip install and npm install
build_backend:
  image: python
  stage: build
  script:
    - pip install -r requirements.txt
  artifacts:
    paths:
      - backend/
build_frontend:
  image: node
  stage: build
  script:
    - npm install
    - npm run build
  artifacts:
    paths:
      - frontend
2) Run unit and functional tests using PyUnit and Python Selenium
test_unit:
  image: python
  stage: test
  script:
    - python -m unittest discover

test_functional:
  image: python
  stage: test
  services:
    - selenium/standalone-chrome
  script:
    - python tests/example.py http://selenium__standalone-chrome:4444/wd/hub https://$CI_BUILD_REF_SLUG-dot-$GAE_PROJECT.appspot.com
3) Deploy to Google Cloud using the SDK
deploy:
  image: google/cloud-sdk
  stage: deploy
  environment:
    name: $CI_BUILD_REF_NAME
    url: https://$CI_BUILD_REF_SLUG-dot-$GAE_PROJECT.appspot.com
  script:
    - echo $GAE_KEY > /tmp/gae_key.json
    - gcloud config set project $GAE_PROJECT
    - gcloud auth activate-service-account --key-file /tmp/gae_key.json
    - gcloud --quiet app deploy --version $CI_BUILD_REF_SLUG --no-promote
  after_script:
    - rm /tmp/gae_key.json
This all runs perfectly, except that the Selenium tests run against the deployed URL, not the current build:
python tests/example.py http://selenium__standalone-chrome:4444/wd/hub https://$CI_BUILD_REF_SLUG-dot-$GAE_PROJECT.appspot.com
I need to have GitLab run three things simultaneously:
a) Selenium
b) A Python server with the application
c) The test script
Possible approaches to run the Python server:
- Run it within the same terminal commands as the test script, somehow
- Docker in Docker
- A service
Any advice or answers would be greatly appreciated!
I wrote a blog post on how I set up web tests for a PHP application. OK, it's PHP, but I guess something similar can be done for a Python project.
What I did was start a PHP development server from within the container that runs the web tests. Because of the artifacts, the development server can access the PHP files. I figured out the IP address of the container, and using this IP address the selenium/standalone-chrome container can connect back to the development server.
I created a simple demo project; you can check out its .gitlab-ci.yml file. Note that I pinned the selenium container to an old version; this was because of an issue with an old version of the php-webdriver package, but today this isn't needed anymore.
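For the Python project above, a rough sketch of the same idea might look like this. It assumes the app starts with python app.py and listens on port 8000, and that tests/example.py takes the Selenium hub URL and the app URL as arguments; adjust these to your project:

test_functional:
  image: python
  stage: test
  services:
    - selenium/standalone-chrome
  script:
    - pip install -r requirements.txt
    - python app.py &                # start the application in the background
    - sleep 5                        # give the server a moment to start
    - export APP_IP=$(hostname -i)   # this container's IP, reachable from the Selenium container
    - python tests/example.py http://selenium__standalone-chrome:4444/wd/hub http://$APP_IP:8000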
I tried to run XCUITest (Objective-C/Swift) on Travis CI for a React Native project, while there are also Node.js Jest unit tests that I'll be running. I was wondering what the best way is to set up the .travis.yml file, since XCUITest is in Objective-C and the Jest unit tests are in Node.js. I've done some research but I'm not sure what a good way to do it is.
It turns out the .travis.yml file can be set up this way:
language: objective-c
git:
  submodules: false
sudo: required
services:
  - docker
node_js:
  - "5.10.1"
before_install:
  - npm install
env:
  - export NODE_VERSION="5.10.1"
script:
  - npm test
  - cd ios/ && xcodebuild test ...
Hopefully it's useful to some people.
Does the build have to run on the drone.io server? Can I run the build locally? Since developers need to pass the build before pushing code to GitHub, I am looking for a way to run the build on the developer's local machine. Below is my .drone.yml file:
pipeline:
  build:
    image: node:latest
    commands:
      - npm install
      - npm test
      - npm run eslint
  integration:
    image: mongo-test
    commands:
      - mvn test
It includes two Docker containers. How do I run the build against this file in Drone? I looked at the Drone CLI, but it doesn't work the way I expected.
@BradRydzewski's comment is the right answer.
To run builds locally you use drone exec. You can check the docs.
Extending his answer: you must execute the command in the root of your local repo, exactly where your .drone.yml file is. If your build relies on secrets, you need to feed them through the command line using the --secret or --secrets-file option.
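For example (the secret name and file are placeholders; exact flag syntax may vary by Drone CLI version):

cd /path/to/your/repo                     # the directory containing .drone.yml
drone exec                                # run the pipeline locally

drone exec --secret NPM_TOKEN=<value>     # pass a single secret
drone exec --secrets-file secrets.txt     # or load several secrets from a file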
When running a local build, there is no cloning step. Drone will use your local Git workspace and mount it in the step containers. So if you check out some other commit/branch during the execution of the local build, you will mess things up, because Drone will see those changes. Don't update your local repo while the build is running.