Bamboo variable replacement in batch file

We're running Atlassian's Bamboo build server 4.1.2 on a Windows machine. I've created a batch file that is executed within a Task. The script is referenced as a .bat file (e.g. createimage.bat) rather than written inline in the task.
Within createimage.bat I'd like to use Bamboo's plan variables. The usual variable syntax is not working, meaning the variables are not replaced. A line in the script could be, for example:
goq-image-${bamboo.INTERNALVERSION}-SB${bamboo.buildNumber}
Any ideas?

You are using Bamboo's internal variable syntax, but the Script Task passes those variables into the operating system's script environment, so they need to be referenced with the respective shell syntax (please note the underscores between terms):
Unix - goq-image-$bamboo_INTERNALVERSION-SB$bamboo_buildNumber
Windows - goq-image-%bamboo_INTERNALVERSION%-SB%bamboo_buildNumber%
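So inside createimage.bat, the line from the question would look something like this (a minimal sketch; the IMAGE_NAME variable and the echo are only illustrative, and the bamboo_* names assume the plan variables from your example):
@echo off
rem createimage.bat - invoked from a Bamboo Script Task that points at this file
rem Plan variables arrive as environment variables, with full stops replaced by underscores
set IMAGE_NAME=goq-image-%bamboo_INTERNALVERSION%-SB%bamboo_buildNumber%
echo Building %IMAGE_NAME%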
Surprisingly, I'm unable to find an official reference for the Windows variation; there's only Using variables in bash right now:
Bamboo variables are exported as bash shell variables. All full stops
(periods) are converted to underscores. For example, the variable
bamboo.my.variable is $bamboo_my_variable in bash. This is related to
File Script tasks (not Inline Script tasks).
However, I figured out the Windows syntax from Atlassian's documentation at some point, and I have tested and used it as documented in Bamboo Variable Substitution/Definition:
these variables are also available as environment variables in the Script Task for example, albeit named slightly different, e.g.
$bamboo_custom_aws_cfn_stack_StringWithRegex (Unix) or
%bamboo_custom_aws_cfn_stack_StringWithRegex% (Windows)

Related

How does environment variable substitution work in gitlab-ci.yml?

Say that I have the following definition, and the script processValue is miraculously present on the path:
script:
- processValue $CI_PROJECT_DIR
- processValue ${CI_COMMIT_REF_NAME+nice}
- processValue ${CI_COMMIT_REF_NAME/#release\//}
Which process evaluates the variables? Will they be substituted somehow by GitLab? Or will GitLab just set the defined variables as environment variables and leave substitution to the default shell of the given Docker image (meaning the last replacement would work only in bash)?
The shell used to execute script lines depends on the OS and can be configured. For Linux environments, the default shell is bash.
But when it comes to expanding environment variables, things are a bit more complicated. Before the shell session which runs your scripts can be created, GitLab needs to be able to parse the CI file to evaluate triggers, determine the build environment, etc. Because of this, GitLab expands environment variables in several rounds, with the rules in each round a bit different from a normal shell session, and a bit different from each other. From the docs:
There are three expansion mechanisms:
GitLab
GitLab Runner
Execution shell environment
In the GitLab stage,
The expanded part needs to be in a form of $variable, or ${variable} or %variable%. Each form is handled in the same way, no matter which OS/shell handles the job, because the expansion is done in GitLab before any runner gets the job.
GitLab Runner then takes another crack at expanding the additional set of variables available at runtime, such as CI_BUILDS_DIR.
Again from the docs:
GitLab Runner internal variable expansion mechanism
Supported: project/group variables, .gitlab-ci.yml variables, config.toml variables, and variables from triggers, pipeline schedules, and manual pipelines.
Not supported: variables defined inside of scripts (for example, export MY_VARIABLE="test").
The runner uses Go’s os.Expand() method for variable expansion. It means that it handles only variables defined as $variable and ${variable}. What’s also important, is that the expansion is done only once, so nested variables may or may not work, depending on the ordering of variables definitions, and whether nested variable expansion is enabled in GitLab.
Finally, the shell executes the script lines with the full context.
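As a point of reference, here is how bash itself evaluates the three forms from the question (a sketch you can run locally; the variable values are made up, and this only shows the shell-level behaviour, independent of GitLab's own expansion stages):
# stand-ins for values GitLab would normally provide
CI_PROJECT_DIR=/builds/group/project
CI_COMMIT_REF_NAME=release/1.2.0
# simple expansion: prints /builds/group/project
echo "$CI_PROJECT_DIR"
# ${var+word}: prints "nice" because the variable is set
echo "${CI_COMMIT_REF_NAME+nice}"
# ${var/#pattern/}: bash prefix substitution, prints "1.2.0"
echo "${CI_COMMIT_REF_NAME/#release\//}"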
See the GitLab-ci runner docs and the GitLab runner environment variables docs for more information and configuration options.

How do I pass this common property to MSBuild using TeamCity?

I am using the TeamCity Visual Studio runner. I want to add a setting that is not accessible from Visual Studio.
/Property:FileAlignment=4096
I typed that directly into the build step "Command line parameters." The build log shows the error:
MSBuild command line parameters contains "/property:" or "/p:" parameters. Please use Build Parameters instead.
I don't understand how to provide this to MSBuild from TeamCity and get rid of this warning!
1. Which kind of parameter should I use?
There are 3 kinds:
Configuration parameters
System properties
Environment variables.
I don't want an environment or system variable because I don't want this build to depend on anything external. I am going to try Config right now, but then I'm not sure I'm filling it in right.
2. How can I tell this parameter is actually getting used?
The build log, which seems to have only navigable/foldable XML-like levels in their program, did not show the build parameters.
You should use "System properties". Don't worry about the name; that's just what TeamCity calls them. They are regular MSBuild properties. You can add them in "Edit Configuration Settings > 7. Build Parameters".
For example, you can add the system property as follows:
Name: system.FileAlignment
Type: System property (system.)
Value: 4096
Note that TeamCity will insist on the "system." prefix. It doesn't matter because the MSBuild script will still see it as $(FileAlignment).
The TeamCity documentation defines Build Parameters as "a convenient way of passing generic or environment-specific settings into the build script". Configuration parameters provide a way to override settings in a build configuration inherited from a template; they are never passed to a build. System and environment parameters are provided to your build script: environment variables are actually set on the system (I can't find any documentation for this), while system parameters are passed to the script engine.
TeamCity automatically passes system properties to the actual command line (it looks like the Visual Studio runner runs msbuild.exe and not devenv.exe). I guess that TeamCity is constructing a command like
cmd> msbuild.exe my-solution.sln /p:FileAlignment=4096
I tried this on my command line, just to make sure that it would work (I added the /v:diagnostic flag). The diagnostic verbosity makes msbuild print all of its properties to the console. I verified that FileAlignment=4096 was in there.
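In other words, something along the lines of (reusing the placeholder solution name from above):
cmd> msbuild.exe my-solution.sln /p:FileAlignment=4096 /v:diagnostic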
That FileAlignment property appears to be a special property that's automatically available in any .csproj file, so you should be good to go. You can check the actual parameters that were passed to the build by clicking on any build and viewing the "Build Parameters" tab. There's a section that shows the "Actual Parameters on Agent".
This was solved. To clarify, Anthony explained how to solve the problem on the command line using MSBuild. It can also be solved on the command line using devenv; per a ticket with Microsoft, the syntax is:
devenv ..\..\mysolution.sln /Rebuild /Property:Config=Release;Platform=AnyCPU;Filealignment=512
What I wanted, however, was to get TeamCity's "Visual Studio Build" runner to accept the parameter. This was achieved as follows. In the box for Command line parameters, I entered:
/Property:FileAlignment=filealignment /v:diag
Then the output tab for Build Parameters shows:
User Defined Parameters
Name Value passed to build
system.filealignment 512
system.verbosity diagnostic
(This is 754 characters too long for a comment, so it has to be typed as a post.)
Hi Anthony, thank you for replying!
Yes, msbuild on the command line works fine for me as well, and project files may store FileAlignment properties. In our case, after discussion with Microsoft, it appears necessary that I specify the solution-wide (i.e. build-wide) alignment in the command arguments, in addition to fixing the projects (which I have already done).
No parameter that I specify in the GUI field (Build Step > Command line parameters) appears on the Build Parameters tab. Of course, some will not compile at all.
Also, I see even weirder behavior: using
/verbosity:diagnostic
vs
/verbosity:minimal
causes a longer build log for the minimal setting! It appears diagnostic hides the details inside a special task, which is part of TeamCity and not mine:
[16:24:05]: Overriding target "SatelliteDllsProjectOutputGroup" in project "C:\WINDOWS\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets" with target "SatelliteDllsProjectOutputGroup" from project "C:\WINDOWS\Microsoft.NET\Framework\v3.5\Microsoft.WinFX.targets".
I am struggling with this because the TeamCity-generated build output log is so nice to have as a tree view. That works with the SLN build, but no .bat file I use can produce a log file with the pretty (XML, presumably) tree format.
If you have further ideas I will love to hear them, and thank you for your edits! :)

How to specify LD_LIBRARY_PATH (or PATH) for each project in Netbeans?

In order to load .DLLs (under Windows) or .SOs (under Linux) we must use the environment variables PATH (Windows) or LD_LIBRARY_PATH (Linux).
The only way we could find to properly use DLLs and SOs was to define the environment variables before starting Netbeans.
Is there a way to specify those environment variables inside NetBeans?
Is it possible to specify them inside the project properties? That way each project could have its own definitions.
Is there a way to just append to those environment variables instead of overriding them?
Background: we are developing a Java program that uses JNI to access native libraries. Those native libraries, in turn, access other dependent native libraries. Because of that, just setting the property "java.library.path" doesn't work, as we need to set the full LD_LIBRARY_PATH (or regular PATH in the case of Windows), too.
Outside Netbeans the application runs fine, because we set the environment variables inside shell scripts.
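Roughly like this, for the Linux case (a sketch; the paths and the main class name are just placeholders):
#!/bin/sh
# append our native library directory rather than overriding the existing path
NATIVE_DIR="$(pwd)/native/linux"
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$NATIVE_DIR"
# java.library.path alone is not enough, because the libraries load further dependent libraries
java -Djava.library.path="$NATIVE_DIR" -cp dist/app.jar com.example.Main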
We don't want to just place the DLLs or SOs in the usual system directories because we don't want to mess up with the operating system installation during development. In addition, we want to have the flexibility to allow any developer to simply get the project from source control (Mercurial) and have all relative paths just working.
There is already a hack on Stack Overflow to set environment variables programmatically in Java. However, we are looking for a less hackish solution.
You can override the Ant targets that NetBeans uses in the build.xml file (or edit the full script in nbproject/build-impl.xml directly, which is not recommended).
The java task is used in the run target. You can use its env parameter to pass environment variables to the process that will run the JVM.

How to compile a linux shell script to be a standalone executable *binary* (i.e. not just e.g. chmod 755)?

I'm looking for a free, open-source tool-set that will compile various "classic" scripting languages, e.g. Korn shell (ksh), csh, bash, etc., into an executable -- and, if the script calls other programs or executables, include those in the single executable as well.
Reasons:
To obfuscate the code before delivery to a customer so as not to reveal our intellectual property. The code is delivered onto the customer's own machines/systems, where I have no control over what access permissions I can set, so the program file has to be a binary whose workings cannot easily be inspected in a text editor or hexdump viewer.
To make a single, simply deployed program for the customer with no, or at most minimal, external dependencies.
I would prefer something simple, without the need for a package manager, since:
I can't rely on the customer's knowledge to carry out (un)packaging instructions, and
I can't rely on the policies governing their machines regarding installing packages (particularly from third parties).
The simplest and preferred approach is to compile to proper machine code a single executable that will run out of the box without any dependencies.
The solutions that fully meet my needs are SHC, a free tool, and CCsh, a commercial tool. Both compile shell scripts to C, which can then be compiled using a C compiler.
Links about SHC:
https://github.com/neurobin/shc
http://www.datsi.fi.upm.es/~frosal/
http://www.downloadplex.com/Linux/System-Utilities/Shell-Tools/Download-shc_70414.html
Links about CCsh:
http://www.comeaucomputing.com/faqs/ccshlit.html
You could use this: http://megastep.org/makeself/
This generates a shell script that auto-extracts a bundled tar.gz archive into a temporary directory and can then run an arbitrary command upon extraction.
Using this tool, you can provide only one shell script to the client.
This script will then extract your ofbsh obfuscated scripts and binaries into /tmp, and run them transparently.
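Usage is roughly as follows (a sketch; the payload directory, archive name, label and startup script are placeholders):
# bundle the payload directory into a single self-extracting installer
makeself.sh ./payload customer-install.run "Customer package" ./run-all.sh
# the customer then simply runs the generated file
sh ./customer-install.run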
You can obfuscate shell scripts with something like ofbsh. You won't easily bundle other programs into a single executable for Unix, though. Normally the approach for installation would be to build a package for your platform's package manager (e.g. rpm, deb, pkg) or to provide a tarball to unpack in the appropriate directory.
If you need an executable file that unpacks the contents, you might be able to use a shell archive. Take a look at the docs for shar(1) and see if that will get you what you want.
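For instance (the file names are placeholders):
# create a self-extracting shell archive from the scripts
shar install.sh lib1.sh lib2.sh > bundle.shar
# the recipient unpacks it by feeding it to the shell
sh bundle.shar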
If you really need a scripting capability to glue multiple C programs together, take a look at the Tcl language. It has an API that is designed to trivially wrap C programs that expect to see argv[] style parameters. You can even embed the chunks of C code into a custom Tcl interpreter and glue it together with various Tcl scripts.
If you really need to make it opaque, you could encrypt the Tcl scripts and wrap the whole thing in something that decrypts them into a buffer and then runs the Tcl interpreter on them. Tcl can accept scripts from a file or a char* buffer, so the decrypted scripts never have to hit the file system.
shc
I have modified the original source and upgraded it to a new version with some feature additions and bug fixes.
It's here.
Example Usage:
shc -f script.sh -o binary_name
script.sh will be compiled to a binary named binary_name.
Note that you still need the required shell to be installed on your system to run this executable.
arx is a great bundler, and you may be able to integrate an obfuscator into its workflow.
Options that are available to you:
Write logic into your code so that, when the code is run for the first time on a box, it checks whether all the required packages exist. If they do not, the code automatically fetches and installs them, without asking the user to do anything. The only question the user should be asked is "Is it ok to proceed with the install of the aforementioned packages? (Y/N)". Anything beyond that is too much. (A sketch of such a check follows after the next step.)
Once the above code is complete (yes, I'm aware it may not be all that simple for you to code this, or maybe it is; I don't know your coding capabilities), copy and paste your completed code to a site like kinglazy.com and an actual executable file will be generated for you.
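A minimal sketch of that first-run dependency check, assuming an apt-based system (the command names and the apt-get usage are assumptions; adapt them to the target platform):
#!/bin/sh
# first-run check: install missing prerequisites after a single confirmation
REQUIRED="curl rsync"
MISSING=""
for cmd in $REQUIRED; do
    command -v "$cmd" >/dev/null 2>&1 || MISSING="$MISSING $cmd"
done
if [ -n "$MISSING" ]; then
    printf "Is it ok to proceed with the install of:%s? (Y/N) " "$MISSING"
    read answer
    case "$answer" in
        [Yy]*) sudo apt-get install -y $MISSING ;;
        *) echo "Aborting."; exit 1 ;;
    esac
fi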
There are quite a few benefits of this particular option:
Yes, you will be able to run the encrypted version of your script without exposing any proprietary information.
No one can try to "view" your script, because if they do, they'll see nothing but indecipherable, encrypted jargon which won't make sense to them.
No one can attempt to modify your script because if they do, the script will immediately become inoperable.
No one can run a debugger on your script to see how it works. If they do, the script will abort.
Also, no one can create copies of your script on the same server. If they do, it will abort and won't work. It'll only allow users to create symlinks to the original location of wherever you want the script to be.
I may be missing some things in what you asked for, but I believe the above satisfies a good portion of what you wanted.
Not sure if this works on other scripts but it certainly does for shell scripts.
You can also use the free online version of CCsh to compile a shell script into a binary:
http://www.comeaucomputing.com/tryccsh/

Using a variable obtained using a pre-build shell command to set an option for the Maven build in Hudson

I have a Hudson job that runs a Maven goal. Before this Maven goal is executed, I have added a pre-build step: a shell script that obtains the version number I want to use in the 'Goals and options' field.
So in my job configuration, under Build Environment I have checked the Configure M2 Extra Build Steps box and added a shell script before the build. The script looks like this:
export RELEASE={command to extract release version}
echo $RELEASE
And then under the Build section I point to my 'root pom'. In the Goals and options I then want to be able to do something like this:
-Dbuild.release.version=${RELEASE} deploy
Where build.release.version is a Maven property referenced in the POM. However, since the shell doesn't seem to make its variables global, it doesn't work. Any ideas?
The only idea I have is to install the Envfile plugin, get the shell script to write out the RELEASE property to a file, and then get the plugin to read the file, but the order in which everything is run may cause problems, and it seems like there must be a simpler way... is there?
Thanks in advance.
I recently wanted to do the same, but AFAIK it's not possible to export values from a pre-build shell to the job environment. If there is a Hudson Plugin for this I've missed it.
What did work, however, was a setup similar to what you were suggesting: having the pre-build shell script write the desired value(s) to a property file in the workspace, and then using the Parameterized Trigger Plugin to trigger another job that actually does the work (in your case, invokes the Maven goal). The plugin can be configured to read the parameters it passes from the property file. So the first job has just the shell script and the post-build triggers, and the second one does the actual work, having the correct parameters available as environment variables.
General idea of the shell script:
echo "foo=bar
baz=`somecmd`" > build.properties
And for your Goals and options, something like:
-Dbuild.release.version=${foo} deploy
Granted, this isn't as elegant as one might want but worked really well for us, since our build was broken into several jobs to begin with, and we can actually reuse the other jobs that the first one triggers (that is, invoke them with different parameters).
When you say it doesn't work, do you mean that your RELEASE variable is not passed to the maven command? I believe the problem is that by default, each line of the shell script is executed separately, so environment variables get lost.
If you want the entire shell script to execute as if it was one script file, make the first line:
#!/bin/sh
I think this is described in the Help information alongside the shell script build step (and if I'm wrong, that's a good place to look for the right syntax).
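So, under that assumption, the pre-build step from the question would become something like (keeping the version-extraction command exactly as you already have it):
#!/bin/sh
# with the shebang first, the step runs as one script and RELEASE survives to the next line
export RELEASE={command to extract release version}
echo $RELEASE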