Say that I have the following definition, and the script processValue is miraculously present on the PATH:
script:
- processValue $CI_PROJECT_DIR
- processValue ${CI_COMMIT_REF_NAME+nice}
- processValue ${CI_COMMIT_REF_NAME/#release\//}
Which process evaluates the variables? Will they somehow be substituted by GitLab? Or will GitLab just set the defined variables as environment variables and leave the substitution to the default shell of the given Docker image? (Meaning the last replacement would work only in bash.)
The shell used to execute script lines depends on the OS and can be configured. For Linux environments, the default shell is bash.
But when it comes to expanding environment variables, things are a bit more complicated. Before the shell session that runs your scripts can be created, GitLab needs to be able to parse the CI file to evaluate triggers, determine the build environment, and so on. Because of this, GitLab expands environment variables iteratively, with rules in each round that differ slightly from a normal shell session and from each other. From the docs:
There are three expansion mechanisms:
GitLab
GitLab Runner
Execution shell environment
In the GitLab stage,
The expanded part needs to be in a form of $variable, or ${variable} or %variable%. Each form is handled in the same way, no matter which OS/shell handles the job, because the expansion is done in GitLab before any runner gets the job.
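For example, here is a field where this GitLab-level expansion applies (the job name deploy-review is made up for this sketch):
deploy-review:
  script:
    - processValue $CI_PROJECT_DIR
  environment:
    name: review/$CI_COMMIT_REF_NAME   # this reference is expanded by GitLab itself, before any runner gets the job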
GitLab Runner then takes another crack at expanding the additional set of variables available at runtime, such as CI_BUILDS_DIR.
Again from the docs:
GitLab Runner internal variable expansion mechanism
Supported: project/group variables, .gitlab-ci.yml variables, config.toml variables, and variables from triggers, pipeline schedules, and manual pipelines.
Not supported: variables defined inside of scripts (for example, export MY_VARIABLE="test").
The runner uses Go’s os.Expand() method for variable expansion. That means it handles only variables written as $variable and ${variable}. What’s also important is that the expansion is done only once, so nested variables may or may not work, depending on the order of variable definitions and whether nested variable expansion is enabled in GitLab.
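A rough sketch of how that single pass can bite with nested variables (the variable names and paths are made up):
variables:
  BASE_PATH: "/builds/group/project"
  CACHE_PATH: "$BASE_PATH/cache"   # expanded only once; whether this resolves depends on definition order and whether nested expansion is enabled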
Finally, the shell executes the script lines with the full context.
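Which brings us back to the original script lines: by the time they reach the shell, it is the shell that performs the remaining parameter expansion, so the behaviour depends on which shell the image provides. A rough annotation:
processValue $CI_PROJECT_DIR                     # plain expansion: works in any POSIX shell
processValue ${CI_COMMIT_REF_NAME+nice}          # "use alternative value" form: also POSIX, works in sh and bash
processValue ${CI_COMMIT_REF_NAME/#release\//}   # pattern substitution: a bash/zsh extension, fails in plain sh
So the suspicion in the question is right: the last replacement relies on the job's shell being bash (or another shell that supports that syntax).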
See the GitLab-ci runner docs and the GitLab runner environment variables docs for more information and configuration options.
Related
The fifth factor of the 12-factor app manifesto is "build, release, run". It says that
the release stage takes the build produced by the build stage and combines it with the deploy's current config. The resulting release contains both the build and the config...
The third factor "Config" explains that the configuration is stored in environment variables. It sounds like a contradiction to me.
If the config is stored in environment variables, how can it be contained in the release? A Dockerfile is the only possibility I could think of, but that would be specific to Docker.
I have a task to deploy an ASP.NET Core React application to two different environments: development and production. Each of these environments should be configured separately.
I use Azure DevOps for CI/CD.
The ASP.NET project contains the following commands for building the application:
<Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
<Exec WorkingDirectory="$(SpaRoot)" Command="npm run build" />
I use ADAL for authorization, which is why I have to pass some secret variables that differ between Dev and Prod:
const adalConfig = {
tenant: process.env.REACT_APP_TENANT,
clientId: process.env.REACT_APP_CLIENT_ID,
redirectUri: process.env.REACT_APP_REDIRECT_URI,
In Azure DevOps I set the parameters with this command:
echo ##vso[task.setvariable variable=REACT_APP_TENANT;isOutput=true]c00000-00ce-000-0f00-0000000004000
In Azure DevOps I have the following standard tasks for the ASP.NET Core build:
.NET Core installer
Restore
Run command (to set env variables)
Build
Publish
Issues:
The environment variable is not set.
I don't even know how to build a separate artifact for production as opposed to development.
Maybe you have already had the task of deploying an ASP.NET Core React app to two different environments? Or please advise whether I need to change my deployment strategy altogether.
The only solution I found is to use a .env file, but I would have to commit this file to git in order to deploy it from master. And I still don't know how to use different files for dev and prod.
TLDR;
You have isOutput=true in your task.setvariable command. This only sets a variable in the Pipelines engine, to be available to other steps, but doesn't actually map it to an env variable. Remove isOutput and you will see the REACT_APP_TENANT env variable.
But only in the following steps - the env variable is not immediately accessible in the same pipeline step.
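A hedged sketch of the corrected command (the tenant id here is just a placeholder, not your real value):
echo "##vso[task.setvariable variable=REACT_APP_TENANT]<your-tenant-id>"
After that step finishes, REACT_APP_TENANT should be available as an environment variable to the subsequent build and publish steps.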
You can define variables at pipeline level if you know their values upfront - that should simplify things. task.setvariable is more useful for dynamic variables.
If you need a different process (or a different set of variables) for different environments, I recommend using multi-stage YAML pipelines or classic Releases. Both allow you to set up different stages, each with its own set of variables.
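For example, a rough multi-stage sketch (the stage names, tenant ids and the ClientApp folder are assumptions, and the build step is heavily simplified):
stages:
- stage: Dev
  variables:
    REACT_APP_TENANT: <dev-tenant-id>
  jobs:
  - job: Build
    steps:
    - script: npm install && npm run build
      workingDirectory: ClientApp
- stage: Prod
  variables:
    REACT_APP_TENANT: <prod-tenant-id>
  jobs:
  - job: Build
    steps:
    - script: npm install && npm run build
      workingDirectory: ClientApp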
Long story
We need to distinguish two separate processes:
Deployment pipeline that's executed on CI agent
Web application that may be hosted in many different ways - Azure Web Apps, self hosting, docker/k8s, etc.
Doing echo ##vso[task.setvariable ...] sets the variable in the pipeline (1.).
Where the variables are actually read (like tenant: process.env.REACT_APP_TENANT) isn't that obvious. If it's Node.js server-side code, it'll be executed in 2. If it's part of some build script, it'll be read in 1.
React is tricky, because:
It behaves differently in development and release mode. In release mode, during the build phase, the whole client-side code is compiled down to a static JS file. So the env variables you set in your pipeline should work.
It cannot simply access any env variable (to protect you from accidentally exposing your server env variables in the client browser). If using create-react-app (which is what the ASP.NET React App template does by default), you have to prefix env variables with REACT_APP_ to use them. If using Webpack directly, you'll need some plugin for this.
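Since create-react-app bakes the REACT_APP_* values in at build time, they must be present as environment variables when npm run build runs on the agent. A minimal sketch, assuming a pipeline variable named ReactAppTenant and the default ClientApp folder:
- script: npm run build
  workingDirectory: ClientApp
  env:
    REACT_APP_TENANT: $(ReactAppTenant)   # maps the pipeline variable to an env variable for this step only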
I am new to TeamCity and need some help setting up a parameterized build. We are using Jenkins to run a regression project written in SoapUI, and through the parameterized build options we are able to pass parameters (such as the test environment, the Mercurial branch it has to pull changes from, test suites in case we execute only selected ones, and tags) to the batch script that executes testrunner.bat on the command line. How can I do the same in TeamCity?
I can see TeamCity allows three types of parameters: env, system, and config, of which the system parameters are passed to the build script. Can I specify all the required parameters as system parameters?
I also need to supply these parameters for every build, but the values may differ. Does TeamCity provide a facility similar to Jenkins, i.e. a GUI where I can change these values?
Yes, you can use system properties for the required parameters that should be passed into the build script.
To specify what mercurial branches to monitor you can configure Feature Branches.
You can run a build manually in TeamCity and customize needed parameters, see how to Trigger Custom Build.
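As a rough illustration (the parameter names and testrunner.bat arguments are made up, so adjust them to your SoapUI setup): define system parameters such as system.test.environment and system.test.suites in the build configuration, then reference them in a Command Line build step with TeamCity's %...% syntax, which TeamCity resolves before the script runs:
testrunner.bat "%system.test.environment%" "%system.test.suites%" "%system.test.tags%"
When you use Run Custom Build, you can override those values for that particular run.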
We're running Atlassian's Bamboo build server 4.1.2 on a Windows machine. I've created a batch file that is executed within a Task. The script is referenced in a .bat file and not inline in the task (e.g. createimage.bat).
Within createimage.bat, I'd like to use Bamboo's plan variables. The usual variable syntax is not working, meaning the variables are not replaced. A line in the script could be, for example:
goq-image-${bamboo.INTERNALVERSION}-SB${bamboo.buildNumber}
Any ideas?
You are using the internal Bamboo variables syntax, but the Script Task passes those into the operating system's script environment and they need to be referenced with the respective syntax accordingly, e.g. (please note the underscores between terms):
Unix - goq-image-$bamboo_INTERNALVERSION-SB$bamboo_buildNumber
Windows - goq-image-%bamboo_INTERNALVERSION%-SB%bamboo_buildNumber%
Surprisingly, I'm unable to find an official reference for the Windows variation; there's only Using variables in bash right now:
Bamboo variables are exported as bash shell variables. All full stops (periods) are converted to underscores. For example, the variable bamboo.my.variable is $bamboo_my_variable in bash. This is related to File Script tasks (not Inline Script tasks).
However, I've figured the Windows syntax from Atlassian's documentation at some point, and tested and used it as documented in Bamboo Variable Substitution/Definition:
these variables are also available as environment variables in the Script Task for example, albeit named slightly different, e.g.
$bamboo_custom_aws_cfn_stack_StringWithRegex (Unix) or
%bamboo_custom_aws_cfn_stack_StringWithRegex% (Windows)
I have a Hudson job that runs a Maven goal. Before this Maven goal is executed, I have added a step that runs before the build starts; it is a shell script that obtains the version number I want to use in the 'Goals and options' field.
So in my job configuration, under Build Environment I have checked the Configure M2 Extra Build Steps box and added a shell script before the build. The script looks like this:
export RELEASE={command to extract release version}
echo $RELEASE
And then under the Build section I point to my 'root pom'. In the Goals and options I then want to be able to do something like this:
-Dbuild.release.version=${RELEASE} deploy
Where build.release.version is a Maven property referenced in the POM. However, since the shell doesn't seem to make its variables global, it doesn't work. Any ideas?
The only one I have is to install the Envfile plugin, get the shell script to write out the RELEASE property to a file, and then get the plugin to read the file, but the order in which everything is run may cause problems, and it seems like there must be a simpler way... is there?
Thanks in advance.
I recently wanted to do the same, but AFAIK it's not possible to export values from a pre-build shell to the job environment. If there is a Hudson Plugin for this I've missed it.
What did work, however, was a setup similar to what you were suggesting: having the pre-build shell script write the desired value(s) to a property-file in the workspace, and then using the Parametrized Trigger Plugin to trigger another job that actually does the work (in your case, invoke the Maven job). The plugin can be configured to read the parameters it passes from the property file. So the first job has just the shell script and the post-build triggers, and the second one does the actual work, having the correct parameters available as environment variables.
General idea of the shell script:
echo "foo=bar
baz=`somecmd`" > build.properties
And for your Goals and options, something like:
-Dbuild.release.version=${foo} deploy
Granted, this isn't as elegant as one might want but worked really well for us, since our build was broken into several jobs to begin with, and we can actually reuse the other jobs that the first one triggers (that is, invoke them with different parameters).
When you say it doesn't work, do you mean that your RELEASE variable is not passed to the maven command? I believe the problem is that by default, each line of the shell script is executed separately, so environment variables get lost.
If you want the entire shell script to execute as if it was one script file, make the first line:
#!/bin/sh
I think this is described in the Help information alongside the shell script build step (and if I'm wrong, that's a good place to look for the right syntax).