Bamboo remote agent pool

In Jenkins, we can define labels and group a number of build slaves under a label. The label can then be mapped to a job, so Jenkins automatically picks an available build slave from the pool and executes the job. Is something similar available in Bamboo for creating a remote agent pool?

I hope I understood your question correctly, but anyway... there is a similar concept in Bamboo. There are two types of agents:
Local agents, which run as threads inside the Bamboo server. They are generally not recommended for bigger Bamboo instances, for performance and security reasons.
Remote agents, which are separate processes running the builds, ideally on different machines so the Bamboo server doesn't suffer from the extra hardware load.
The match between jobs and agents is based on job requirements and agent capabilities, e.g.:
An agent defines capabilities, which effectively state what it can build and which tools are installed, e.g. .NET or a JDK.
A job or deployment environment defines requirements, i.e. what is needed to successfully accomplish the task, e.g. Git and Maven.
In the end, Bamboo tries to find an agent that provides the full set of capabilities a job or deployment environment requires.
Special rules apply if an agent is dedicated to a job or environment, or if the agent is an elastic agent (running in EC2).
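For example, a remote agent can declare custom capabilities in its bamboo-capabilities.properties file. A rough sketch, where the capability keys and paths are made-up examples:

    # bamboo-capabilities.properties on the remote agent (illustrative values)
    system.jdk.JDK\ 1.8=/usr/lib/jvm/java-8-openjdk
    system.builder.mvn3.Maven\ 3=/opt/apache-maven-3.3.9
    # a plain custom capability that a job can then list as a requirement
    dotnet=true

A job that requires the matching capabilities will then only be dispatched to agents that declare them.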
More reading:
https://confluence.atlassian.com/bamkb/difference-between-local-agents-and-remote-agents-457703602.html
https://confluence.atlassian.com/bamboo/configuring-a-job-s-requirements-289277064.html
https://confluence.atlassian.com/bamboo/requirements-for-deployment-environments-838427584.html
https://confluence.atlassian.com/bamboo/dedicating-an-agent-629015108.html
https://confluence.atlassian.com/bamboo/managing-your-elastic-image-configurations-289277147.html

Related

How to build a development and production environment in apache nifi

I have 2 Apache NiFi servers, development and production, hosted on AWS. Currently the migration between development and production is done manually. I would like to know if it is possible to automate this process and ensure that people do not develop in production.
I thought about uploading the entire NiFi configuration to GitHub and having it deploy the new NiFi on the production server, but I don't know if that would be the correct thing to do.
One option is to use NiFi Registry: store the flows in the registry and share the registry between the development and production environments. You can then promote the latest version of a flow from dev to prod.
As you say, another option is to use Git to share the flow.xml.gz between environments, combined with a deploy script. The flow.xml.gz stores the data flow configuration/canvas. You can use parameterized flows (https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#Parameters) to point NiFi at different external dev/prod services (e.g. the dev NiFi processor uses a dev database URL, prod NiFi points to the prod database URL).
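For illustration, a processor property can reference a parameter instead of a hard-coded value, and each environment carries its own parameter context; the names and values below are made-up examples:

    Processor property (same flow definition in dev and prod):
        Database Connection URL = #{db.url}

    Parameter context on the development NiFi:
        db.url = jdbc:postgresql://dev-db.internal:5432/app

    Parameter context on the production NiFi:
        db.url = jdbc:postgresql://prod-db.internal:5432/app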
One more option is to export all or part of the NiFi flow as a template and upload the template to your production NiFi; however, the registry is probably a better way of handling this. More info on templates here: https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#templates.
I believe the original design intent behind NiFi was not necessarily to have different environments, but to allow live changes in production. I guess you would build your initial data flow using some test data in production and then, once it's ready, start the live data flow. But I think it's reasonable to want separate environments.

Push code from VSTS repository to on-prem TFS?

This is my first post on here, so forgive me if I've missed an existing answer to this question.
Basically my company conducts off-site development for various clients in government. Internally, we use cloud VSTS, Octopus deploy and Selenium to ensure a continuous delivery pipeline in our internal Azure environments. We are looking to extend this pipeline into the on-prem environments of our clients to cut down on unnecessary deployment overheads. Unfortunately, due to security policies we are unable to use our VSTS/Octopus instances to push code directly into the client environment, so I'm looking for a way to get code from our VSTS environment into an on-prem instance of TFS hosted on their end.
What I'm after, really, is a system whereby the client logs into our VSTS environment, validates the code, then pushes some kind of button which will pull it to their local TFS, where a replica of our automated build and test process will manage the CI pipeline through their environments and into prod.
Is this at all possible? What are my options here?
There is no direct way to migrate source code with history from VSTS to an on-premises TFS. You would need a 3rd-party tool, such as the Commercial Edition of OpsHub (note that it is not free).
It sounds like you need a new feature that is coming to Octopus Deploy, see https://octopus.com/blog/roadmap-2017 --> Octopus Release Promotions
I quote:
Many customers work in environment where releases must flow between more than one Octopus server - the two most common scenarios being:
Agencies which use one Octopus for dev/test, but then need an Octopus server at each of their customer's sites to do production deployments
I will suggest the following, though it involves a small custom script.
Add a build agent to your VSTS which is physically located on the customer's premises. This is easy; just register the agent with the online endpoint (see the registration sketch after these steps).
Create a build definition in VSTS that gets the code from VSTS but, instead of building it, commits it to the local TFS. You will need a small PowerShell script here; you can add it as a custom PowerShell step in the build definition.
The local TFS orchestrates the rest.
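A rough sketch of the agent registration, run from the unpacked agent folder on the on-prem machine; the account name, PAT, pool and agent name are placeholders:

    config.cmd --url https://youraccount.visualstudio.com ^
               --auth pat --token <personal-access-token> ^
               --pool Default --agent OnPremAgent01
    run.cmd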
Custom code:
Say your agent is installed in d:/agent.
1. Keep the local TFS workspace mapped to some directory (say c:/tfs).
2. The script copies the new sources from d:/agent/work/ to c:/tfs.
3. The script commits from c:/tfs to the local TFS.
Note: you will need the /force option (and probably some more) to prevent conflicts.
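A minimal PowerShell sketch of that copy-and-commit step; the paths, the workspace mapping and the check-in options are assumptions, so adjust them to your layout:

    # Assumes c:\tfs is already mapped to a TFVC workspace and tf.exe is on PATH
    $source = "d:\agent\_work\1\s"   # sources downloaded by the VSTS agent (hypothetical path)
    $target = "c:\tfs"

    # Copy the fresh sources over the mapped workspace
    Copy-Item -Path "$source\*" -Destination $target -Recurse -Force

    Push-Location $target
    # Pend new files and check everything in to the local TFS
    # (edits/deletes may need extra handling depending on the workspace type)
    tf add * /recursive /noprompt
    tf checkin /comment:"Sync from VSTS" /noprompt
    Pop-Location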
I believe this is not as ugly as it sounds.

DC/OS running a service on each agent

Is there any way of running a service (single instance) on each deployed agent node? I need that because each agent needs to mount storage from S3 using s3fs.
The name of the feature you're looking for is "daemon tasks", but unfortunately, it's still in the planning phase for Mesos itself.
Because schedulers don't know the entire state of the cluster, Mesos needs to add a feature to enable this functionality. Once it is in Mesos, it can be integrated with DC/OS.
The primary workaround is to use Marathon to deploy an app with the UNIQUE constraint ("constraints": [["hostname", "UNIQUE"]]) and set the number of app instances to the number of agent nodes. Unfortunately, this means you have to adjust the instance count whenever you add new nodes.
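A minimal Marathon app definition for that workaround might look like the following; the command, bucket, mount path and instance count are placeholders, and instances must match your current number of agent nodes:

    {
      "id": "/s3fs-mounter",
      "cmd": "s3fs my-bucket /mnt/s3 -f",
      "instances": 5,
      "cpus": 0.1,
      "mem": 64,
      "constraints": [["hostname", "UNIQUE"]]
    }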

Multiple Mobilefirst-Server artifacts concurrent deploy

I use a batch procedure for deploying MFP v7 artifacts (wlapps and adapters).
The procedure is based on the standard ant tasks defined in worklight-ant-deployer.jar.
The MFP environment runs on a WAS cell and consists of a single AdminService application managing multiple WLRuntimes.
Is it possible to run two (or more) deploy tasks concurrently against different WLRuntime targets?
Furthermore, sticking to a single WLRuntime, is it possible to deploy multiple different artifacts concurrently?
Thanks in advance for any answer/comment.
Ciao, Stefano.
For a single WL runtime, all deployments are internally done sequentially. You can start the deployments concurrently, but internally only one deployment is done after the other, due to a transaction locking mechanism. If you start too many deployments in parallel, you may run into timeout situations, though this is seldom the case. By default, a deployment transaction waits for 20 minutes before it may time out.
Note: starting deployments in parallel here means using the Ant tasks, the wladm tool, or the REST service directly. In the MobileFirst Admin Console UI, you will see the deploy buttons disabled while another deployment transaction is ongoing; hence, in the UI it is not so easy to start deployments in parallel. The UI tries to prohibit that.
Note 2: the 20 minutes mentioned above is for the locking mechanism itself. Ant/wladm has its own timeout parameters, which may be lower, so with the Ant tasks you might get timeouts sooner than 20 minutes. See here.
For multiple WL runtimes, deployments can run concurrently. The mentioned locking mechanism is per runtime, so deployments occurring in one WL runtime will not influence any other WL runtime.
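So for two different runtimes you can simply launch your existing deployer targets in parallel, for example with Ant's parallel task; the target names below are placeholders for whatever your batch procedure already defines:

    <!-- Illustrative sketch: deploy-to-runtime1/2 stand for your existing
         worklight-ant-deployer targets, one per WLRuntime -->
    <target name="deploy-all">
      <parallel threadCount="2">
        <antcall target="deploy-to-runtime1"/>
        <antcall target="deploy-to-runtime2"/>
      </parallel>
    </target>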

How to halt provisioning of one VM until another VM is done?

Using Vagrant+Chef Solo I'm setting up two VMs: #1 is a TeamCity server, #2 is a TeamCity agent. Provisioning is done by first installing the TeamCity server package on VM #1, then the agent VM is booted and requests data from the server which is used to install the agent. That whole thing works fine.
But now I want to alter the server after the agent is done provisioning. I want to modify the server's database directly, to change an attribute that is only available after the agent has spun up. Is there a way for one VM's provisioning to trigger another VM? Once the agent is done, I'd like to somehow resume provisioning the server, so I can make the database edit.
Any thoughts, recommendations, or feedback welcomed. I'm new to Vagrant, Chef, and TeamCity, so there's a chance I'm missing a much easier solution.
* Why do I want to edit the DB directly, you may wonder? TeamCity agents must be authorized before they can be used, and I want to do this programmatically. The solution I've found is to edit the DB directly, because the authorization functionality is not exposed via the TeamCity REST API (as far as I can tell).
If you can test that the agent is installed/answering, you may add a ruby_block looping over this test before continuing the recipe execution.
This loop should have a sleep and a counter to avoid infinite loops.
I have no knowledge of TeamCity, so I can't tell if this is the best way.
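A rough sketch of such a wait loop as a Chef ruby_block; the host name, port and retry limits are assumptions, so replace the check with whatever tells you the agent is answering:

    ruby_block 'wait for TeamCity agent' do
      block do
        require 'socket'
        attempts = 0
        begin
          # hypothetical check: agent reachable on its port
          TCPSocket.new('teamcity-agent.local', 9090).close
        rescue SystemCallError, SocketError
          attempts += 1
          raise 'TeamCity agent never became reachable' if attempts >= 30
          sleep 10   # wait 10 seconds between attempts, up to 30 attempts
          retry
        end
      end
    end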
In general, Chef is designed to manage your system, not simply provision it (though this is less true in the modern cloud world with "golden image" strategies). Nonetheless, in your case, your best bet is to just set up chef-client as a service that runs every 15 minutes. Once the client has finished provisioning, the next run on the server will be able to authorize it.
If you really want to "trigger" the one from the other, you'd either need to do that externally with something like etcd or consul, or you would need to set up an SSH keypair between the boxes and add a ruby_block on the client that either does the database modification directly or calls chef-client on the server.
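If you go the keypair route, the trigger could be as simple as an execute resource (or ruby_block) in the agent's run list that kicks off a chef-client run on the server over SSH; the host name and user are placeholders, and the keypair and sudo rights must already be in place:

    # hypothetical host/user; assumes passwordless SSH and sudo are already set up
    execute 'trigger chef-client run on the TeamCity server' do
      command "ssh -o StrictHostKeyChecking=no vagrant@teamcity-server.local 'sudo chef-client'"
      action :run
    end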