Bamboo custom deployment-project variables

In Bamboo, I want to take a build release and deploy it to a target host. The target host should be variable.
As far as I know, it is not possible to run deployment projects with customized deployment variables (the way plan variables can be overridden on custom builds). My question is: is that true, and if so, what is the best way to achieve what I want?
Here are some thoughts I had during research regarding this issue:
I could use a plan variable "host" in my build job and customize it as needed. Then I write this variable into a file that is declared as a build artifact, and in my deployment tasks I use the "Inject Bamboo variables configuration" task to read it back (a sketch of such a file follows). This solution has the disadvantage that I always have to run a build, even if the artifacts do not change.
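For illustration, a minimal sketch of such an artifact file; the file name, the variable, and the host value are made up, and the final injected name depends on the namespace configured in the task (typically something like ${bamboo.inject.host}):

    # deploy.properties - written by the build job and declared as a build artifact
    host=staging-01.example.com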
Global variables are not feasible because they are not build-dependent, so I cannot use them for this task: another build could overwrite them before my deployment runs.
Are there any better solutions/thoughts on this task?

target host should be variable
No, each host is a separate environment. Otherwise the notion of "promoting an environment" breaks apart. This may be a lot of environments to set up by hand, and therefore I strongly advise using Bamboo Specs (in Java), as sketched below.
it is not possible to run deployment-projects with customized deployment-variables
I confirm: it's not possible. And again, it would break the notion of environment promotion. Rule of thumb: your environment setup should be immutable, with no variable variance. The only difference between runs is the artifacts that are to be deployed.
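To illustrate the Specs suggestion above, here is a minimal sketch of generating one environment per host with Bamboo Specs (Java). The project/plan keys, the host list, and the server URL are placeholders, the deployment tasks are omitted, and the exact builder API may differ slightly between Bamboo versions:

    import com.atlassian.bamboo.specs.api.BambooSpec;
    import com.atlassian.bamboo.specs.api.builders.Variable;
    import com.atlassian.bamboo.specs.api.builders.deployment.Deployment;
    import com.atlassian.bamboo.specs.api.builders.deployment.Environment;
    import com.atlassian.bamboo.specs.api.builders.deployment.ReleaseNaming;
    import com.atlassian.bamboo.specs.api.builders.plan.PlanIdentifier;
    import com.atlassian.bamboo.specs.util.BambooServer;

    @BambooSpec
    public class DeploymentSpec {
        public static void main(String[] args) {
            // One immutable environment per target host, generated in a loop
            // instead of clicked together in the UI.
            String[] hosts = {"host-a.example.com", "host-b.example.com"};
            Environment[] environments = new Environment[hosts.length];
            for (int i = 0; i < hosts.length; i++) {
                // The only per-environment difference is the host variable;
                // .tasks(...) would add the same deployment tasks everywhere.
                environments[i] = new Environment(hosts[i])
                        .variables(new Variable("host", hosts[i]));
            }
            Deployment deployment = new Deployment(
                    new PlanIdentifier("PROJ", "PLAN"), "Deploy to hosts")
                    .releaseNaming(new ReleaseNaming("release-1"))
                    .environments(environments);
            new BambooServer("https://bamboo.example.com").publish(deployment);
        }
    }

Adding a host then becomes a one-line change plus republishing the spec, and each environment keeps its own promotion history.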

You can set variables under 'Other environment settings' in the deployment project while configuring an environment. This way you do not have to run a build when the artifacts don't change; just change the variable value before deploying the artifact.

Related

In QTP, can we use only the shared object repository and disable the local repository?

I just started to use QTP to test a Java monitor we have. I think it will be cleaner if I put all the objects in the shared repository and do not use the local repository at all. Is there any downside to doing that?
No. Well, it depends.
There is no reason to disable the local repo. There is not even a way to do that! But yes, you can simply avoid using it.
I usually have a rule active in the team that reads "whatever is in the local repo should be gone upon check-in". I even have a check in the library initialization code that looks whether there is anything in the local repository and yields a warning if so.
But whether this is useful depends on the AUT and on your workflow:
The repo that a test actually uses is the combination of the associated (shared, central) repository, "overlaid" by what is in the local repo. So you can not only add entries, but also modify shared repo entries by using the local one. This is quite a powerful feature which allows you to define the "norm" case in the shared repository and the exceptional cases in the local one.
If your AUT has well-defined GUI objects which are easily re-identifiable in various contexts, fine. But if not, this feature comes in handy.
I agree that this "overlaying" mechanism easily leads to "bugs" (or let's say: playback problems) that are hard to track down. Usually, as Murphy suggests, the local repo is the last idea that comes to mind when you are diagnosing strange playback symptoms.
It is, however, quite some clickwork to open the central repo through the object repo manager, check it out, make it writeable, etc. for every small change. So the local repo is a good "buffer" for updates.
Consequently, especially if you are using QC as a central storage point and possibly have version control enabled there, you will love the ability to add new stuff to the local repo first and move it over to the central repo in one batch just before checking in the change. (By the way, other team members would kill you if you'd check out the central repo file for more than 5 minutes, effectively putting a write-lock on it.)

How do I run just a single stage in my bamboo build?

I have a bamboo build with 2 stages: Build&Test and Publish. The way bamboo works, if Build&Test fails, Publish is not run. This is usually the way that I want things.
However, sometimes, Build&Test will fail, but I still want Publish to run. Typically, this is a manual process where even though there is a failing test, I want to push a button so that I can just run the Publish stage.
In the past, I had two separate plans, but I want to keep them together as one. Is this possible?
From the Atlassian help forum, here:
https://answers.atlassian.com/questions/52863/how-do-i-run-just-a-single-stage-of-a-build
Short answer: no. If you want to run a stage, all prior stages have to finish successfully, sorry.
What you could do is to use the Quarantine functionality, but that involves re-running the failed job (in yet-unreleased Bamboo 4.1, you may have to press "Show more" on the build result screen to see the re-run button).
Another thing that could be helpful in such situations (though not for the OP) is disabling jobs.
Generally speaking, the best solution to most Bamboo problems is to rely on Bamboo as little as possible because you ultimately can't patch it.
In this case, I would just quickly write or re-use an asynchronous dependency-resolution mechanism (something like GNU Make and its targets), and run that from a single stage.
Then just run everything on the default all-like target, and let users select the target via a custom run variable.
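For concreteness, a sketch (using Bamboo Specs in Java, as in the first answer above) of a one-stage plan that delegates all sequencing to Make; the keys, the "target" plan variable, and the Makefile this relies on are all assumptions:

    import com.atlassian.bamboo.specs.api.BambooSpec;
    import com.atlassian.bamboo.specs.api.builders.Variable;
    import com.atlassian.bamboo.specs.api.builders.plan.Job;
    import com.atlassian.bamboo.specs.api.builders.plan.Plan;
    import com.atlassian.bamboo.specs.api.builders.plan.Stage;
    import com.atlassian.bamboo.specs.api.builders.project.Project;
    import com.atlassian.bamboo.specs.builders.task.ScriptTask;
    import com.atlassian.bamboo.specs.util.BambooServer;

    @BambooSpec
    public class SingleStagePlanSpec {
        public static void main(String[] args) {
            // One stage, one job; all real sequencing lives in the Makefile,
            // so Bamboo's stage gating never gets in the way.
            Plan plan = new Plan(new Project().key("PROJ").name("My project"),
                    "Build and publish", "BP")
                    // Default builds everything; override via "Run customized",
                    // e.g. target=publish to run only the publish target.
                    .variables(new Variable("target", "all"))
                    .stages(new Stage("Default stage").jobs(
                            new Job("Make", "JOB1").tasks(
                                    new ScriptTask().inlineBody("make \"${bamboo.target}\""))));
            new BambooServer("https://bamboo.example.com").publish(plan);
        }
    }

Running only Publish then becomes a customized run with target=publish, and Make decides what is already up to date.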

What is an environment variable in an operating system vs. an IDE?

I'm getting confused about how to place environment variables in my head. Are they like global variables for the operating system? And is a global for an IDE just a reference to something the IDE needs to be able to locate? Is this the correct idea to have?
At execution, each process has an image. It consists of a set of variables in memory that can be accessed, read and modified at all times, and it defines the state of the process at a certain point in time.
Some of these variables are created by the process itself for its internal logic, and others are inherited from its parent process (usually the shell used to launch it). The latter usually describe a certain state of the system and are called environment variables. You can use them to define things like:
- location of certain executables on your computer (like java or python)
- location of shared libraries (or DLLs if you're on Windows)
- location of the database you're querying
- permissions and user access (although that's not the safest place to define them)
The point is, you can define anything in your environment, and every process launched with that environment will inherit these variables.
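A small Java sketch of that inheritance; GREETING is a made-up variable, and printenv assumes a Unix-like system:

    import java.io.IOException;
    import java.util.Map;

    public class EnvDemo {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Inherited from the parent process (e.g. the shell that launched us).
            System.out.println("PATH = " + System.getenv("PATH"));

            // A child process inherits a copy of our environment,
            // plus anything we add to that copy before starting it.
            ProcessBuilder pb = new ProcessBuilder("printenv", "GREETING");
            Map<String, String> env = pb.environment();
            env.put("GREETING", "hello from the parent"); // hypothetical variable
            pb.inheritIO();
            pb.start().waitFor(); // the child prints: hello from the parent
        }
    }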
I don't know what you mean by the IDE, and quite frankly I am not sure I completely understood your question. But since it's been a while and no one has answered, I'm hoping this could give you the beginnings of an answer.

Is there a way to 'test run' an ant build?

Is there a way to run an ant build such that you get an output of what the build would do, but without actually doing it?
That is to say, it would list all of the commands that would be submitted to the system, output the expansion of all filesets, etc.
When I've searched 'ant' and 'test', I get overwhelming hits for running tests with ant. Any suggestions on actually testing ant build files?
It seems that you are looking for a "dry run".
I googled a bit and found no evidence that this is supported.
Here's a Bugzilla request for that feature that explains things a bit:
https://issues.apache.org/bugzilla/show_bug.cgi?id=35464
This is impossible in theory and in practice. In theory, you cannot test a program meaningfully without actually running it (basically the halting problem).
In practice, since individual ant tasks very often depend on each other's output, this would be quite pointless for the vast majority of Ant scripts. Most of them compile some source code and build JARs from the class files - but what would the fileset for the JAR contain if the compiler didn't actually run?
The proper way to test an Ant script is to run it regularly, but on a test system, possibly a VM image that you can easily restore to its original state.
Here's the problem: you have target #1 that builds a bunch of stuff, then target #2 that copies it.
You run your Ant script in test mode; it pretends to do target #1. Now it comes to target #2 and there is nothing to copy. What should target #2 report? Things get even more confusing when you have if and unless clauses on your Ant targets.
I know that Make has a command-line parameter (-n, or --dry-run) that tells it to print the commands without executing them, but I never found it all that useful. Maybe that's why Ant doesn't have one.
Ant does have a -k (-keep-going) parameter that tells it to keep going if something fails. You might find that useful.
As Michael already said, that's what test systems are for; this is where VMs come in handy.
From my Ant bookmarks: some years ago a tool called "Virtual Ant" was announced. I never tried it, so don't regard this as a tip, merely as something someone heard of.
From what the site says:
"With Virtual Ant you no longer have to get your hands dirty with XML to create or edit Ant build scripts. Work in a completely virtualized environment similar to Windows Explorer and run your tasks on a Virtual File System to see what they do, in real time, without affecting your real file system*. The actual Ant build script is generated in the background."
Hm, sounds too good to be true ;-)
"...without affecting your real file system..." might be what you asked for!?
They provide a 30-day trial license, so you won't lose any money, only the time it takes to have a look.

Static code analysis: integrate into debug and release builds, or just one or the other?

As a best practice, do you run code analysis on both debug and release builds, or just one or the other?
If for some reason the two builds are different (and they really shouldn't be for static analysis purposes), you should ensure that your metrics are running against what's actually going out to production.
Ideally, you should have a CI server, and the commands that developers run to initiate such analysis are no different from what the CI server does.
I usually pick one, and that one is the release build. I guess it doesn't really matter, but I tend to think that when gathering information about what will run in production, it is best to test exactly what will go to production (this goes for analysis, profiling, benchmarking, etc.).
Static Code Analysis will show the same results regardless of your build type.
Debug/Release only changes the resulting assembly and the inclusion or exclusion of debugging information at runtime.
I don't have separate ‘debug’ and ‘release’ builds (see Separate ‘debug’ and ‘release’ builds?).
The LLVM folks actually recommend analyzing the DEBUG configuration:
ALWAYS analyze a project in its "debug" configuration
Most projects can be built in a "debug" mode that enables assertions. Assertions are picked up by the static analyzer to prune infeasible paths, which in some cases can greatly reduce the number of false positives (bogus error reports) emitted by the tool.
In addition, debug builds tend to be faster to produce (no time spent on optimization), and in the CI world faster is always better (all else being equal).