I am a little confused about GitLab scheduled pipeline variables. I see there is a section to add variables when scheduling a pipeline, as shown below.
However, there is a section to add variables for the whole project in Settings -> CI/CD -> Variables, as shown below.
My question is: does the scheduled pipeline get access to the variables defined under the settings anyway? I think the variables section under the scheduled pipeline is there to add extra variables or override existing ones. Thank you in advance.
You can define variables at different levels in GitLab CI, based on where they are most relevant, and they are resolved by specificity:

Project variables can override group variables.

Schedule variables can override project variables, much as when you pass variables to a manually triggered pipeline.

So yes, a scheduled pipeline still gets all the variables defined under Settings -> CI/CD -> Variables; the variables section on the schedule is there to add extras or override them.
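As an illustration (the variable name DEPLOY_ENV is made up for the example): if the project defines DEPLOY_ENV = staging under Settings -> CI/CD -> Variables and a schedule defines DEPLOY_ENV = production, a job like the one below prints "production" on scheduled runs and "staging" on other runs.

# .gitlab-ci.yml
deploy:
  script:
    - echo "Deploying to $DEPLOY_ENV"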
In Azure DevOps, inside a classic build, I can generally use $(myVar) to get the value of a variable in certain places. I am not sure if that particular usage has a name.

Is there a way to pass an expression for the same use cases? I mean, instead of $(myVar), can I do something like $(coalesce(myVar, otherVar))?

I have tried wrapping it in different brackets, but that doesn't seem to work.

I have checked the docs here: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops

They don't show how to use expressions in classic pipelines, only YAML.
I agree with Daniel: the common use of expressions is to define conditions for a step, job, or stage, or to define variables. Expressions work well in YAML pipelines, but defining variables with $(coalesce(myVar, otherVar)) instead of $(myVar) is not supported in classic pipelines.

$(coalesce(...)) is one of the built-in functions. The only scope where those functions work in classic pipelines is the custom conditions for a job or task, see:
(Screenshots: the custom condition setting in the control options of a job and of a task.)
But it seems you're trying to use the built-in functions when defining variables, and for now that's not supported in classic pipelines; they can only be used to define/control the conditions of a job or task.
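For example, a custom condition on a task can use them like this (myVar and otherVar are the variables from the question; the comparison value is made up):

and(succeeded(), eq(coalesce(variables['myVar'], variables['otherVar']), 'some-value'))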
Expressions, as outlined in the documentation you linked, only apply to YAML. You won't be able to do what you want unless you use YAML.
While using expressions in classic pipelines is not supported (except in the control section of a task), I've had success defining a release variable with the expression and using that release variable instead.

In my case I wanted to do releases based on git tags and extract the version number from the tag.
Variable definition (with the expression I want to use):
Using the variable in the task:
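A sketch of that pair, assuming tags like v1.2.3 (the variable name versionFromTag is illustrative, and whether the $[ ] runtime expression evaluates in a release variable may depend on your pipeline version):

versionFromTag = $[replace(variables['Build.SourceBranch'], 'refs/tags/v', '')]

Then any task field can reference it as usual:

$(versionFromTag)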
I am using IntelliJ to develop Java applications which use YAML files for the app properties. These YAML files have some placeholder/template params like:
credentials:
  clientId: ${client.id}
  secretKey: ${secret.key}
My CI/CD pipeline takes care of substituting the actual values for these params (client.id and secret.key) based on the environment to which the app is deployed.

I'm looking for something similar in IntelliJ: something where I configure static/fixed values for the params (e.g. client.id and secret.key) within the IDE, and when I run locally from the IDE, these values are substituted into the YAML files before the run.

This would save me from having to put the placeholder params back into the YAML files each time I check in other changes to my version control system.
There is no such feature in IDEA, because IDEA cannot auto-detect every possible known or unknown expression language or template macro that you could use in a YAML file. Furthermore, IDEA would have to create a context for those template files.

To IDEA it's just a normal YAML file.
IDEA does have a language injection feature, which can be used to inject SQL into a Java string, for instance, or inject any language into a YAML field. This is a really nice feature and can help you rename SQL column names and so on, but it won't solve your particular problem, because you want to make the template "runnable" within a certain context where you define your variables.
My suggestion would be to write a small, simple program that does nearly the same thing the template engine does.

If you only need simple string replacements and no macro execution, this can be done with a regular expression.

If it's more complicated, I would use the same template engine the "real processor" uses.
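For the simple string replacement case, a minimal sketch in Java (the file paths and the hard-coded values are made up for the example; it assumes ${...} placeholders like in your snippet):

import java.nio.file.*;
import java.util.*;
import java.util.regex.*;

public class YamlTemplateFiller {
    public static void main(String[] args) throws Exception {
        // Fixed local values; these could also be loaded from a .properties file.
        Map<String, String> values = Map.of(
                "client.id", "local-client-id",
                "secret.key", "local-secret-key");

        String template = Files.readString(Path.of("src/main/resources/app.yml"));

        // Replace each ${key} with its configured value; unknown keys stay as-is.
        Matcher m = Pattern.compile("\\$\\{([^}]+)\\}").matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String replacement = values.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(out);

        Files.writeString(Path.of("build/app-local.yml"), out.toString());
    }
}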
If you want further help, it would be good to know what your YAML processing pipeline looks like.
I have a Snakefile with several rules and only a few need more than 1 GB/core to run on a cluster. The resources directive is great for this, but I can't find a way of setting a default value. I would prefer not having to write resources: mem_per_cpu = 1024 for every rule that doesn't need more than the default.
I realize that I could get what I want using __default__ in a cluster config file and overriding the mem_per_cpu value for specific rules. I hesitate to do this because the memory requirements are platform-independent, so I would prefer including them in the Snakefile itself. It would also prevent me from being able to specify local resource limits using the --resources command-line option.
Is there a simple solution with Snakemake that would help me here? Thanks!
I was reading the Snakemake changelog and came across this:

Add --default-resources flag, that allows to define default resources for jobs (e.g. mem_mb, disk_mb), see docs.
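A sketch of how that might look for the resource in the question (assuming the flag accepts arbitrary NAME=VALUE resource pairs, as it does for mem_mb):

snakemake --default-resources mem_per_cpu=1024

Rules that declare their own resources: mem_per_cpu still take precedence, and --resources can still be used to cap the total locally.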
My goal is to create build definitions within Visual Studio Team Services for both test and production environments. I need to update 2 variables in my code which determine which database and which blob storage the environment uses. Up till now, I've juggled this value in a Resource variable, and pulled that value in code from My.Resources.DB for a library, and Microsoft.Azure.CloudConfigurationManager.GetSetting("DatabaseConnectionString") for an Azure worker role. However, changing 4 variables every time I do a release is getting tiring.
I see a lot of posts that get close to what I want, but they're geared towards C#. For reasons beyond my influence, this project is written in VB.NET. It seems I have 2 options. First, I could call the MSBuild process with a couple of defined properties, passing them to the .metaproj build file, but I don't know how to get them to be used in VB code. That's preferable, but, at this point, I'm starting to doubt that this is possible.
I've been able to set some pre-processor constants, to be recognized in #If-#Else directives.
#If DEBUG = True Then
BarStaticItemVersion.Caption = String.Format("Version: {0}", "1.18.0.xxx")
#Else
BarStaticItemVersion.Caption = String.Format("Version: {0}", "1.18.0.133")
#End If
msbuild CalbertNG.sln.metaproj /t:Rebuild /p:DefineConstants="DEBUG=False"
This seems to work, though I need to Rebuild to change the value of that constant. Should I have to? Should Build be enough? Is this normal, or an indication that I don't have something set quite right?
I've seen other posts that talk about pre-processing the source files with some other builder, like Ant, but that seems like overkill. It feels like I'm close here. But I want to zoom out and ask, from a clean sheet of paper, if you're given 2 variables which need to change per environment, you're using VB.NET, and you want to incorporate those variable values in an automated VS Team Services build process upon code check-in, what's the best way to do it? (I want to define the variables in the VSTS panel, but this just passes them to my builder, so I have to know how to parse the call to MSBuild to make these useful.)
I can control picking between 2 static strings, now, via compiler directives, but I'd really like to reference the Build.BuildNumber that comes out of the MSBuild process to display to the user, and, if I can do that, I can just feed the variables for database and blob container via the same mechanism, and skip the pre-processor.
You've already found the way to pass data from the MsBuild arguments directly into the code. An alternative is to use the Condition attribute in your project files to make certain property groups optional; it even allows you to include specific files conditionally. You can control conditions by passing /p:ConditionalProperty=value on the MsBuild command line. This at least ensures people use a set of values that make sense together.
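As a sketch, such a conditional property group in the .vbproj could look like this (the Environment property name is made up; you'd pass it with /p:Environment=Live):

<PropertyGroup Condition=" '$(Environment)' == 'Live' ">
  <DefineConstants>$(DefineConstants),LIVE=True</DefineConstants>
</PropertyGroup>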
The problem is that when MsBuild runs in incremental mode it is likely not to process your changes (as you've noticed); the reason is that the input files are unchanged since the last build and are all older than the last generated output files.

To bypass this behavior you'd normally create a separate solution configuration and override the output location for all projects to be unique for that configuration. Combined with setting the compiler constants for that specific configuration, you ensure that incremental builds work as intended when building that Configuration/Platform combination.
I do want to echo some of the comments from JerryM and Daniel Mann: some items are better stored elsewhere or updated before you actually start the compile phase.
Possible solutions:
Store your configuration data in config files and use Configuration Transformation to generate the right config file based on the selected solution configuration. The process is explained on MSDN. To enable configuration transformation on all project types, you can use SlowCheetah.

Store your configuration data in the config files and use MsDeploy, specifying a Parameters.xml file that matches the deploy package. It will perform the transformation at deploy time and will actually allow your solution to contain a standard config file you use at runtime, plus a publish profile which will post-process your configuration. You can use a SetParameters.xml file to override the variables at deploy time.
Create an installer project (such as through WiX) and merge the final configuration at install time (similar to MsDeploy). You could even provide a UI which prompts for specific values (and can supply default values).
Use a CI server, like the new TFS/VSTS 2015 task-based build engine, and combine it with a task that can search & replace tokens, like the Replace Tokens task, the Tokenization Task, or Colin's ALM Corner Build and Release Tasks; there are also a whole bunch that specifically deal with versioning. Handling these things in the CI server also allows you to do a quick build locally at all times and run these relatively expensive steps on the build server (patching source code breaks incremental builds in MsBuild, because there are always newer input files).
When talking specifically about versioning, there are a number of ways to set the AssemblyVersion and AssemblyFileVersion just before compile time; usually this involves overwriting the AssemblyInfo file before compilation. Your code can then use reflection to read the value at runtime. You can use the AssemblyInformationalVersion to specify something like your example above, which contains .xxx or other text; it also ensures that the version displayed always reflects the information shown when reading the file properties through Windows Explorer.
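As a sketch of the reflection part, reusing the BarStaticItemVersion control from your example (this assumes the build stamped a value into AssemblyInformationalVersion):

Dim asm = System.Reflection.Assembly.GetExecutingAssembly()
Dim attr = CType(Attribute.GetCustomAttribute(asm,
    GetType(System.Reflection.AssemblyInformationalVersionAttribute)),
    System.Reflection.AssemblyInformationalVersionAttribute)
' Shows whatever version string the build wrote into the assembly
BarStaticItemVersion.Caption = String.Format("Version: {0}", attr.InformationalVersion)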
I have written and maintained a lot of Nant/MsBuild files, and I have a few problems in a current project where it seems that Rake would make my life a lot easier. However, I am a bit stumped at the first hurdle, which may be because I am looking at the problem through Nant eyes.

To give some context, here is how I normally expect my builds to be laid out (forgetting the actual physical structure, just looking at what its components are):
- Root
    | - build-scripts
    |     | - default.build
    |     | - *.build
    | - build-properties
    |     | - dev.properties
    |     | - live.properties
    |     | - *.properties
    | - tools
    |     | - Nant
    |     |     | - Nant.exe
    |     |     | - *.*
    |     | - Nunit
    |     |     | - Nunit.exe
    |     |     | - *.*
    |     | - **/*.*
Now, as you can see above, the main components of a build are the build scripts, which contain the actual build instructions, and the build properties, which are files containing purely the per-environment properties, i.e.:

dev.properties may contain web.service.url = "http://some.dev.address"
live.properties may contain web.service.url = "http://some.live.address"

Then there are the tools, which are external executables used by the build scripts, such as Nant, Nunit, JsTestDriver, etc.
Now, focusing on my default.build file: it tends to contain a lot of up-front properties, such as the directories used (i.e. libs, output, package, project, tests), so they can all be evaluated up front. The other *.build files then contain the relevant build scripts; for example, tests.build deals with Nunit and runs the Unit/Integration/Acceptance tests.

The main problem for me here is that I have these environmental properties which configure the build for each environment, plus a lot of pre-defined properties which are overridden when invoked from the command line: you would pass environment=live if you were building for a live environment, if you left it blank the environment would default to dev, and on the CI server it would be set to ci.

None of the examples I can find tell me how to have default properties/variables that are overridden from the command line with Rake. I know you can set values and read them via the ENV[] mechanism, but to use this I would need to set a global variable as normal and then have a step that checks any env vars passed in and overwrites the globals with those values. Also, the only way I can see to allow external configuration files is to use global variables within them and include the file, which I don't mind doing, but I was hoping there were some best practices in this area I could learn from.

The final thing I was considering is that really the only variable to be passed in would be the environment (dev, ci, live, etc.), so I could make the default build task require an argument, which is supported. I'm not sure this is best, though, as I would want it to run as "dev" if nothing was set, and with a required argument you would always need to set one (not the end of the world).

As you can see, there are quite a few questions in this area, all about taking my existing approach and adapting it to work with Rake. Any advice or information would be great!
This isn't the proper answer, but I have basically kept the same sort of structure as above and used global variables for everything. So I have a properties file which contains something like:
$dir["project"] = "c:/Some/Project/Dir"
$dir["tests"] = "c:/Some/Tests/Dir"
$settings["auto-migrate-deltas"] = true
It is not really ideal, but it allows me to inject different properties based on the environment. It also makes a lot of the build files less re-usable, as they all depend on these global variables; parameter-based tasks might make them more re-usable, but there isn't a huge amount of documentation or many working examples in that area, so I am just going to stick with this system until I hit a major problem.
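For the default-with-override part, a minimal sketch that leans on Rake turning name=value command-line arguments into ENV entries (the paths follow the layout above, and the properties files are plain Ruby like the snippet):

# Rakefile
environment = ENV.fetch("environment", "dev")  # "rake environment=live" overrides the default
load "build-properties/#{environment}.properties"

task :default do
  puts "Building for #{environment}, project dir #{$dir['project']}"
end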