Building workflow with automated builds - testing

I have a question about workflow with Docker and GitLab CI, or automated builds in general.
This is how I imagine a build should look ↓.
How do I do this with GitLab CI?
I know how to do each of these tasks on its own, but I don't know how to put them all together.
In my imagination I would need more than one base image.
Maybe I am misunderstanding the whole thing.
How should this process be done in general?
Thanks for your help 😀

Since your question is very general, I will answer it with an example.
Consider an imaginary C++ project which contains the code, a Makefile that creates the executable "app", and this Dockerfile:
FROM ubuntu:16.04
ADD ./app /app
CMD ["/app"]
To build the application and the docker image as you said, you could use a GitLab CI config like this:
stages:
  - test
  - build
  - docker

test:
  stage: test
  script:
    - make test

build:
  stage: build
  script:
    - make
  artifacts:
    paths:
      - ./app

docker:
  stage: docker
  dependencies:
    - build
  script:
    - docker build -t your-repo/image-name:latest .
    - docker push your-repo/image-name:latest
Explanation
This CI file creates three jobs: "test", "build" and "docker". "test" runs "make test" to execute whatever tests our imaginary codebase might have. If they succeed, the GitLab runner will execute the next job, "build".
"build" builds the application by calling "make". We expect make to create a file "app" in the current directory, which is our compiled application that will run in the container. The section "artifacts" states that we want to keep this resulting file, since we need it for the next job.
The next job, "docker", has a section "dependencies"; there we state that this job depends on the output of the job called "build", which created our file "app" earlier. We then build the Docker image using docker build and push it as usual.
As said before, these are just examples, and especially the script sections will greatly differ based on your projects and your runner config. See the official CI documentation for all possibilities.
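For completeness, pushing to a registry usually also requires logging in, and the "docker" job needs a runner that can actually run Docker commands. A minimal sketch of that job, assuming the GitLab Container Registry, its predefined CI variables and a Docker-in-Docker service (none of which appear in the original example), could look like this:
docker:
  stage: docker
  image: docker:latest
  services:
    - docker:dind   # depending on the runner config, extra DinD settings may be needed
  dependencies:
    - build
  script:
    # log in with GitLab's predefined registry credentials
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"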

Related

Buildsteps after each other

How do I run several build steps after each other in IntelliJ? I think I want a mini CI/CD build system inside the editor.
For example, the project I work on now is a Spring Boot and JavaScript web site. I need to build it with Maven using mvn clean package -Pdockerimage. This copies the files for building the Docker image to target/dockerimgbuild.
Then I want to build the Docker image using docker build -t scheduling-ui-dev . and after that run it with Docker Compose (docker-compose up --build) from src/main/resources/docker-compose.
I have built one run configuration for each of these steps, but how do I run them after each other? I have found that you can have "before launch" steps, but the system is clunky and complains that target/dockerimgbuild doesn't exist even before it has run the Maven step which creates it. The latest problem I stumbled on was that a file prevented Maven from removing target/dockerimgbuild, and all run steps were automatically removed from the run configurations.
There is a run configuration called compound, but that runs everything in parallel and you cannot specify the order, which is a problem.
I wonder if it is feasible to start TeamCity in a container; does anyone have a clue about that (is TeamCity easy to configure, how do I make it launch a docker-compose container on my host machine, etc.)?
My solution right now is to have several terminals (if this gets more permanent I will replace it with a script) where I just press up and enter to execute the steps manually. Seems stupid, as I guess Maven itself can do all of this... but I don't know how, or how much work it is.
There is a compound Run/Debug configuration: https://www.jetbrains.com/help/idea/run-debug-configuration-compound-run-configuration.html
Also, there is a multi-run plugin: https://plugins.jetbrains.com/plugin/7248-multirun
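If the IDE route stays too clunky and you do fall back to a script, a minimal sketch that chains the steps from the question could look like this (the build-context path target/dockerimgbuild and the compose file location are taken from the question; adjust them if your layout differs):
#!/usr/bin/env bash
set -euo pipefail   # stop at the first failing step

# 1. Maven build; copies the Docker build context to target/dockerimgbuild
mvn clean package -Pdockerimage

# 2. Build the image from the generated context (context path assumed)
docker build -t scheduling-ui-dev target/dockerimgbuild

# 3. Start the stack defined by the compose file
(cd src/main/resources/docker-compose && docker-compose up --build)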

Deploying Vue.js App using azure devops release pipeline

I have a Vue.js application that is created and built using Vue CLI 3. I have some environment variables in .env.test and .env.prod files.
To build the app I'm using an Azure DevOps build pipeline where I run the command:
npm run build:test or npm run build:prod
That generates different artifacts that are the input for a stage in the Azure DevOps release pipeline.
The problem I'm facing is that I don't want to have separate builds for every environment. I want to build once and deploy to different environments. Is that possible?
How do I handle those variables so I can build one package for all environments? Is that good practice? Or should I have different pipelines for different environments, as I have right now?
From the perspective of CI
There should be only a single build pipeline that builds the artifact regardless of the environment where it will run.
A single .env.prod can then be used to deploy the artifact to any environment (Development, Production, etc.).
You have to provide the configuration with tokens, which will be replaced at the Deployment/Release stage:
env_key1=#{token_key1}#
env_key2=#{token_key2}#
env_key3=#{token_key3}#
Therefore, just build the project and publish the artifact using a single configuration file for all environments.
From the perspective of CD
I would recommend using a single release pipeline with multiple stages (Development, Production, etc.).
Provide separate variable groups based on the stages. This keeps the variables separate and logically grouped, and lets you use Azure Key Vault as the source of secrets. Variable names must match the environment tokens (without the prefix and suffix).
Add whatever task you wish into each stage to find and replace the tokens.
Currently, I use the Replace Tokens extension from the marketplace. Depending on the stage, a different group of variables will be substituted. The Replace Tokens task does all of the work automatically, i.e. it scans the js files and replaces the tokens. The default token prefix and suffix are #{ and }#, but the task allows you to provide your own.
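As an illustration only, the equivalent step in a YAML-based pipeline might look roughly like this (the exact task name and inputs depend on the extension version you install, and the file pattern here is an assumption about where the built bundles live):
- task: replacetokens@3
  inputs:
    targetFiles: '**/dist/**/*.js'   # assumed location of the built js bundles
    tokenPrefix: '#{'
    tokenSuffix: '}#'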
So we had a similar problem. We are about to update our solution to work with a variable group, but if you want a way to do it without one you can always do something like this:
- script: |
    npm install
    npm run test:unit
    if [ $? -ne 0 ]; then
      exit 1
    fi
    npm run build-prod
  condition: and(succeeded(), not(in(variables['Build.Reason'], 'PullRequest', 'Manual')))
  displayName: 'npm install, test and build for prod'

- script: |
    npm install
    npm run test:unit
    if [ $? -ne 0 ]; then
      exit 1
    fi
    npm run build
  condition: and(succeeded(), in(variables['Build.Reason'], 'PullRequest', 'Manual'))
  displayName: 'npm install, test and build for test'
So, a quick breakdown of the scripts: if the build was part of a pull request or manual, we wanted a staging build, which used the default build script. Otherwise we assumed the build was meant for production (you will want some branch policies to enforce this). Then the release pipeline checked for a build tag, which we set with the following:
- task: PowerShell@2
  condition: and(succeeded(), not(in(variables['Build.Reason'], 'PullRequest', 'Manual')))
  inputs:
    targetType: 'inline'
    script: 'Write-Host "##vso[build.addbuildtag]release"'

- task: PowerShell@2
  condition: and(succeeded(), in(variables['Build.Reason'], 'PullRequest', 'Manual'))
  inputs:
    targetType: 'inline'
    script: 'Write-Host "##vso[build.addbuildtag]test"'
Now, like I said we are moving away from this, but it did work pretty well and it allowed us to have one build that would deploy with the correct settings without needing to do anything too fancy.
If you use something like this, the last step is to filter the builds when they get to the release pipeline based on the build tag and branch.

Setup azure-pipelines.yml "Directory '/home/vsts/work/1/a' is empty." with ASP.NET Core

I seriously need help creating my yml build file because I cannot find any good tutorial, sample or other kind of help anywhere. I always get a similar error: see the warning below; it seems my build artifact is always empty. All steps succeed, but I cannot deploy because my files are not found. Stupid.
##[section]Starting: PublishBuildArtifacts
==============================================================================
Task : Publish Build Artifacts
Description : Publish build artifacts to Azure Pipelines/TFS or a file share
Version : 1.142.2
Author : Microsoft Corporation
Help : [More Information](https://go.microsoft.com/fwlink/?LinkID=708390)
==============================================================================
##[warning]Directory '/home/vsts/work/1/a' is empty. Nothing will be added to build artifact 'drop'.
##[section]Finishing: PublishBuildArtifacts
Here is my pipeline definition
# ASP.NET Core
# Build and test ASP.NET Core projects targeting .NET Core.
# Add steps that run tests, create a NuGet package, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/dotnet-core

trigger:
- master

pool:
  vmImage: 'Ubuntu-16.04'

variables:
  buildConfiguration: 'Release'

steps:
# - script: dotnet build --configuration $(buildConfiguration)
#   displayName: 'dotnet build $(buildConfiguration)'
- task: DotNetCoreInstaller@0
  inputs:
    version: '2.2.202' # replace this value with the version that you need for your project
- script: dotnet restore
- task: DotNetCoreCLI@2
  displayName: Build
  inputs:
    command: build
    projects: '**/*.csproj'
    arguments: '--configuration Release' # Update this to match your need
- task: PublishBuildArtifacts@1
  inputs:
    ArtifactName: 'drop'
Note that the 2 lines I commented out
# - script: dotnet build --configuration $(buildConfiguration)
# displayName: 'dotnet build $(buildConfiguration)'
are in fact part of the default script. I'm not using the default script. I'm following the tutorial https://learn.microsoft.com/en-us/azure/devops/pipelines/languages/dotnet-core?view=azure-devops
Also, why can I not use the templates available for my other projects? Is it because I'm using an Azure DevOps repository, or because my project has specific settings? I have other projects where I can manage the build and then the deployment with graphical templates and tasks. That is a lot easier.
Yes, help on YAML pipelines seems a bit scattered and thin on the ground at the moment.
Since your project is AspNetCore, I think what you're missing is the dotnet publish task, after the build task and before the PublishArtifacts:
- task: DotNetCoreCLI@2
  inputs:
    command: publish
    publishWebProjects: True
    arguments: '--configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory)'
    zipAfterPublish: True
But here are some steps I have been through while trying to resolve my frustration with .NET Core YAML pipelines:
Have you already looked through the guide's example tasks & snippets at "Build, test, and deploy .NET Core apps"?
Have you noticed that you can click on the build log to see the detailed output of each step in your pipeline?
Have you noted that the task DotNetCoreCLI@2 is equivalent to running dotnet <command> on your own desktop, so you can to some extent run/debug these tasks locally?
I found Predefined Variables gave some helpful clues. For instance, it tells us that the path \agent\_work\1\a is probably the $(Build.ArtifactStagingDirectory) variable, which helped me mimic the pipeline on my local machine.
Logically, your error message tells us that $(Build.ArtifactStagingDirectory) is empty when the pipeline reaches the last step. The .NET Core example page suggests to me that publish is the task that populates it for a web project. For anything else, I think just the dotnet build task is enough.
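Putting those pieces together, the tail of the pipeline could look roughly like this (a sketch only; PathtoPublish simply makes the default artifact staging directory explicit):
- task: DotNetCoreCLI@2
  displayName: Publish
  inputs:
    command: publish
    publishWebProjects: True
    arguments: '--configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)'
    zipAfterPublish: True
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'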
Just replace in variables:
**/Dockerfile
by
$(Build.SourcesDirectory)/Dockerfile
That works for me.
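For context, this refers to the Dockerfile path input of a Docker build step; a minimal sketch with the Docker@2 task (the repository name is a hypothetical placeholder) would be:
- task: Docker@2
  inputs:
    command: build
    repository: 'my-app'   # hypothetical image name
    Dockerfile: '$(Build.SourcesDirectory)/Dockerfile'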

Gitlab pipeline is always failing with error message

I'm setting up the CI/CD for my .NET Core project.
I have configured only the build stage, which is failing with the error message below:
"MSBUILD : error MSB1003: Specify a project or solution file. The current working directory does not contain a project or solution file."
My solution structure is different: the project's .sln file is inside another folder, so it is not available in the working directory.
My .sln file is in the Solution folder.
Here is what my gitlab-ci.yml looks like:
stages:
  - build

before_script:
  - 'dotnet restore'

build:
  stage: build
  script:
    - dotnet build Solution/MyApp360.sln
  only:
    refs:
      - master
      - release
      - develop
What am I missing here?
Am I passing the .sln file path wrongly?
How do I pass the .sln path to the build command?
Any example is appreciated.
If you are running your jobs on Windows with the cmd executor, then you need to write your paths with '\' instead of '/'. The line would then be:
dotnet build Solution\MyApp360.sln
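In context, the job from the question would then look roughly like this (a sketch assuming a Windows runner; whether the dotnet restore in before_script also needs the explicit solution path is an assumption worth verifying):
before_script:
  - 'dotnet restore Solution\MyApp360.sln'   # restore may also need the explicit path

build:
  stage: build
  script:
    - dotnet build Solution\MyApp360.sln
  only:
    refs:
      - master
      - release
      - develop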

Build a maven project with Gitlab CI

I'm trying to get an existing Jenkins job working in GitLab CI.
On Jenkins my project gets built via mvn clean package and the resulting war file is then moved to a Tomcat container. Once finished, another job gets triggered to call a specific URL of this project to do some stuff which is unrelated to my question. When this is done, Tomcat stops and the two jobs are finished.
How can I do that with GitLab CI?
I started doing something like
image: cdornbusch/tomcat-maven

stages:
  - build

build:
  script:
    - "mvn clean tomcat:run"
    - "echo 'TEST'"
  stage: build
but I never see the echo 'TEST', which makes sense since mvn tomcat:run never stops... but how can I build and deploy the project and then call a specific URL of it? Once the build is done, I don't need the Tomcat instance anymore.
Just a side note: I use my own Docker image, which installs Ubuntu with Maven, Java and Tomcat, to fulfill my project requirements.
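One way to approach it, purely as a sketch (the port, the root path and the endpoint URL below are assumptions, and curl must be available in the image), is to start Tomcat in the background inside the job, wait until it responds, call the URL and then stop it so the job can finish:
build:
  stage: build
  script:
    # start Tomcat in the background and remember its PID
    - mvn clean tomcat:run & TOMCAT_PID=$!
    # wait until the application answers (hypothetical port/context path)
    - until curl -sf http://localhost:8080/; do sleep 5; done
    # call the specific URL that triggers the follow-up work
    - curl -sf http://localhost:8080/my-endpoint
    # stop the background Tomcat; the job ends afterwards
    - kill $TOMCAT_PID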