Support two languages in one Travis CI file (Objective-C and Node for React Native)

I'm trying to run XCUITest (Objective-C/Swift) on Travis CI for a React Native project that also has Node Jest unit tests I'll be running. I was wondering what the best way is to set up the travis.yml file, since XCUITest is in Objective-C and the Jest unit tests are in node_js. I've done some research but I'm not sure what a good way to do it is.

It turns out the travis.yml file can be set up this way:
language: objective-c
git:
  submodules: false
sudo: required
services:
  - docker
node_js:
  - "5.10.1"
before_install:
  - npm install
env:
  - export NODE_VERSION="5.10.1"
script:
  - npm test
  - cd ios/ && xcodebuild test ...
Hopefully it's useful to some people.
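If the default Node on the macOS image doesn't match the version the Jest tests expect, one variation (a sketch of mine, not part of the setup above; it assumes nvm is available on the Travis macOS image, and the version number is only illustrative) is to install it explicitly in before_install:
before_install:
  # install and switch to the Node version the Jest tests expect
  # (illustrative version; nvm is assumed to be present on the macOS image)
  - nvm install 5.10.1
  - nvm use 5.10.1
  - npm install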

Related

Gitlab CI/CD test whether pm2 start gives errors

So I am trying to make a CI/CD pipeline for the first time, as I have no experience with DevOps at all, and I want to add pm2 to it so I can test whether it is able to start the API successfully without issues. This way, I can ensure that the API works when it is deployed.
The one issue I have is this: if I do pm2 start ecosystem.config.js --env development, how will the pipeline know whether this startup was successful or not?
This is currently the YAML file that I have:
image: node:10.19.0

services:
  - mongo:3.6.8
  - pm2:3.5.1

cache:
  paths:
    - node_modules/

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - npm install

test:
  stage: test
  script:
    - npm run lint
    - pm2 start ecosystem.config.js --env development
Any ideas on how to test the pm2 startup?
Thanks in advance!
UPDATE #1:
The reason I want to test the pm2 startup is that pm2 provides several process.env variables that my API requires in order to start.
Starting the API without those variables immediately gives errors, as it needs them to initialize the connection to other APIs.
So is there a workaround that would let the API start without issues?
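This isn't resolved in the question itself, but one possible sketch (my assumption, not from the thread) is to start the app and then let the job fail if pm2 doesn't report an online process a few seconds later; pm2 jlist prints the process list as JSON, and a failing grep exits non-zero, which fails the GitLab job:
test:
  stage: test
  script:
    - npm run lint
    - npm install -g pm2            # assumes pm2 isn't already provided by the image
    - pm2 start ecosystem.config.js --env development
    # give the app a few seconds to boot, then fail the job if pm2
    # does not report an online process; the compact JSON layout of
    # `pm2 jlist` output is an assumption here
    - sleep 5
    - pm2 jlist | grep -q '"status":"online"'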

How to run the CI on Windows, Linux and macOS?

I'm trying to package a Python and Qt app with PyInstaller. However, I haven't found a way to run multiple runners in parallel.
You basically would duplicate the job definition and just assign different tags to pick the runners you want. Here's how I build an Electron app on three different runners in parallel:
.build:
  stage: build
  script:
    - npm install --progress=false
    - npm run electron:build

build-linux:
  extends: .build
  tags:
    - linux

build-mac:
  extends: .build
  tags:
    - mac

build-windows:
  extends: .build
  tags:
    - windows
This config makes use of hidden jobs and extends.
Since GitLab 14.1 it's possible to use the parallel: matrix: keyword in combination with tags, e.g.
build:
  script:
    - npm install --progress=false
    - npm run electron:build
  parallel:
    matrix:
      - PLATFORM: [linux, macos, windows]
  tags:
    - ${PLATFORM}
The tags: keyword should then pick up the CI runner for the specified operating system.

Vue: How to find out dependencies needed for a build engine on a CI platform?

I have a Vue app that requires me to build the assets from my machine each time updates are made. Another developer asked me to let them know the dependencies so that they can set up a build engine on Circle CI. Does that mean the dependencies and devDependencies listed in package.json? Some of those I don't remember manually installing.
It's hard to know what your colleague is asking for without talking to them directly, but for cloud-based continuous integration systems, you usually need to know what the system prerequisites are in order to build. The stuff that's in package.json is the easy bit, as long as you have a "build" command in your package.json "scripts" section.
As an example, I have a package.json that looks roughly like this:
"build": "yarn build:umd & yarn build:es & yarn build:unpkg",
"build:umd": "rollup --config build/rollup.config.js --format umd --file dist/honeybadger-vue.umd.js",
"build:es": "rollup --config build/rollup.config.js --format es --file dist/honeybadger-vue.esm.js",
"build:unpkg": "rollup --config build/rollup.config.js --format iife --file dist/honeybadger-vue.js",
"build:unpkg-minify": "rollup MINIFY=true --config build/rollup.config.js --format iife --file dist/honeybadger-vue.min.js",
However, for the continuous integration setup, I need to tell the CI system what I need in order to run those commands. Those are likely the dependencies your colleague is asking about.
For example, I use Travis rather than Circle CI, but I need to specify which versions of Node I need to run tests on and which external dependencies I might need in order to build the library and run the tests. Those could be libraries like ImageMagick, headless Chrome, or maybe a database client for some use cases. I also need to know what commands need to be run for the build (Travis makes a reasonable assumption once you tell it that the language is node_js; I would expect Circle CI to be similar).
In my particular Travis setup, I have a config file in the project called .travis.yml that tells Travis everything it needs to know, like this:
dist: trusty
language: node_js
node_js:
  - 8
  - 10
  - 11
sudo: false
addons:
  chrome: stable
before_script:
  - "export DISPLAY=:99.0"
  - "sh -e /etc/init.d/xvfb start"
  - sleep 3 # give xvfb some time to start
before_install:
  - google-chrome-stable --headless --disable-gpu --remote-debugging-port=9222 http://localhost &
Basically, I presume your developer counterpart is looking for enough information to make sure it's possible to build the library on someone else's machine. That's almost certainly what they mean by "dependencies", as your package file will contain sufficient information to reference any of the dependencies that Node can handle itself.
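Since the question mentions Circle CI specifically, here is a minimal sketch of what an equivalent CircleCI config could look like (the Docker image tag and script names are illustrative assumptions, not taken from the answer above):
# .circleci/config.yml
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/node:16.20   # Node version the project is built with (illustrative)
    steps:
      - checkout
      - run: npm ci              # install everything listed in package.json / package-lock.json
      - run: npm run build       # the "build" script from the "scripts" section
workflows:
  build-and-test:
    jobs:
      - build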

Drone.io PHP build badges always "none"

I installed Drone version 0.4 on AWS and integrated it with my private Bitbucket repositories. Everything is working as it should. Here is my .drone.yml file:
build:
  image: phpunit/phpunit
  cache:
    - vendor
  commands:
    - echo Building Started
    - composer install
    - phpunit
My commits are successfully built with unit tests, but my badges always look like "build|none". Do I have to add anything else for that?
Thanks for the help.

How to avoid reinstalling dependencies for each job in Gitlab CI

I'm using Gitlab CI 8.0 with gitlab-ci-multi-runner 0.6.0. I have a .gitlab-ci.yml file similar to the following:
before_script:
  - npm install

server_tests:
  script: mocha

client_tests:
  script: karma start karma.conf.js
This works but it means the dependencies are installed independently before each test job. For a large project with many dependencies this adds a considerable overhead.
In Jenkins I would use one job to install dependencies then TAR them up and create a build artefact which is then copied to downstream jobs. Would something similar work with Gitlab CI? Is there a recommended approach?
Update: I now recommend using artifacts with a short expire_in. This is superior to cache because it only has to write the artifact once per pipeline whereas the cache is updated after every job. Also the cache is per runner so if you run your jobs in parallel on multiple runners it's not guaranteed to be populated, unlike artifacts which are stored centrally.
Gitlab CI 8.2 adds runner caching which lets you reuse files between builds. However I've found this to be very slow.
Instead I've implemented my own caching system using a bit of shell scripting:
before_script:
  # unique hash of required dependencies
  - PACKAGE_HASH=($(md5sum package.json))
  # path to cache file
  - DEPS_CACHE=/tmp/dependencies_${PACKAGE_HASH}.tar.gz
  # Check if cache file exists and if not, create it
  - if [ -f $DEPS_CACHE ];
    then
      tar zxf $DEPS_CACHE;
    else
      npm install --quiet;
      tar zcf - ./node_modules > $DEPS_CACHE;
    fi
This will run before every job in your .gitlab-ci.yml and only install your dependencies if package.json has changed or the cache file is missing (e.g. first run, or file was manually deleted). Note that if you have several runners on different servers, they will each have their own cache file.
You may want to clear out the cache file on a regular basis in order to get the latest dependencies. We do this with the following cron entry:
@daily find /tmp/dependencies_* -mtime +1 -type f -delete
EDIT: This solution was recommended in 2016. In 2021, you might consider the caching docs instead.
A better approach these days is to make use of artifacts.
In the following example, the node_modules/ directory is immediately available to the lint job once the build stage has completed successfully.
build:
  stage: build
  script:
    - npm install -q
    - npm run build
  artifacts:
    paths:
      - node_modules/
    expire_in: 1 week

lint:
  stage: test
  script:
    - npm run lint
From the docs:
cache: Use for temporary storage for project dependencies. Not useful for keeping intermediate build results, like jar or apk files. Cache was designed to be used to speed up invocations of subsequent runs of a given job, by keeping things like dependencies (e.g., npm packages, Go vendor packages, etc.) so they don’t have to be re-fetched from the public internet. While the cache can be abused to pass intermediate build results between stages, there may be cases where artifacts are a better fit.
artifacts: Use for stage results that will be passed between stages. Artifacts were designed to upload some compiled/generated bits of the build, and they can be fetched by any number of concurrent Runners. They are guaranteed to be available and are there to pass data between jobs. They are also exposed to be downloaded from the UI. Artifacts can only exist in directories relative to the build directory and specifying paths which don’t comply to this rule trigger an unintuitive and illogical error message (an enhancement is discussed at https://gitlab.com/gitlab-org/gitlab-ce/issues/15530 ). Artifacts need to be uploaded to the GitLab instance (not only the GitLab runner) before the next stage job(s) can start, so you need to evaluate carefully whether your bandwidth allows you to profit from parallelization with stages and shared artifacts before investing time in changes to the setup.
So, I use cache. When I don't need to update the cache (e.g. the build folder in a test job), I use policy: pull.
I prefer to use cache because it removes the files when the pipeline has finished.
Example
image: node

stages:
  - install
  - test
  - compile

cache:
  key: modules
  paths:
    - node_modules/

install:modules:
  stage: install
  cache:
    key: modules
    paths:
      - node_modules/
  after_script:
    - node -v && npm -v
  script:
    - npm i

test:
  stage: test
  cache:
    key: modules
    paths:
      - node_modules/
    policy: pull
  before_script:
    - node -v && npm -v
  script:
    - npm run test

compile:
  stage: compile
  cache:
    key: modules
    paths:
      - node_modules/
    policy: pull
  script:
    - npm run build
I think it's not recommended, because all jobs of the same stage can be executed in parallel.
First, all jobs of build are executed in parallel.
If all jobs of build succeed, the test jobs are executed in parallel.
If all jobs of test succeed, the deploy jobs are executed in parallel.
If all jobs of deploy succeed, the commit is marked as success.
If any of the previous jobs fails, the commit is marked as failed and no jobs of further stages are executed.
I have read that here:
http://doc.gitlab.com/ci/yaml/README.html
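As a minimal illustration of that point (a sketch of mine, reusing the job names from the question): two jobs placed in the same stage start in parallel, so neither can rely on the other having installed node_modules first; the dependencies have to come from a cache or from artifacts produced in an earlier stage.
stages:
  - test

# Both jobs belong to the test stage, so they run in parallel
# and each needs its own access to node_modules.
server_tests:
  stage: test
  script:
    - mocha

client_tests:
  stage: test
  script:
    - karma start karma.conf.js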
I solved the problem with a symbolic link to a folder outside the working directory. The solution looks like this:
# .gitlab-ci.yml
before_script:
  - New-Item -ItemType SymbolicLink -Path ".\node_modules" -Target "C:\GitLab-Runner\cache\node_modules"
  - yarn
after_script:
  - (Get-Item ".\node_modules").Delete()
I know this is a rather dirty solution, but it saves a lot of time in the build process and extends the storage's life.
GitLab introduced caching to avoid redownloading dependencies for each job.
The following Node.js example is inspired by the caching documentation.
image: node:latest

# Cache modules in between jobs
cache:
  key: $CI_COMMIT_REF_SLUG
  paths:
    - .npm/

before_script:
  - npm ci --cache .npm --prefer-offline

server_tests:
  script: mocha

client_tests:
  script: karma start karma.conf.js
Note that the example uses npm ci. This command is like npm install, but designed to be used in automated environments. You can read more about npm ci and the command-line arguments you can pass in its documentation.
For further information, check Caching in GitLab CI/CD and the cache keyword reference.