Modify remote agent configuration in Bamboo

I have a newly built VM that I am configuring as a Bamboo remote agent. The setup itself works, but the display name in Bamboo shows the agent name in caps (PERF8), whereas I want it as perf8.
I did the following:
1. Stop the Bamboo agent service on the agent machine.
2. Delete the agent in Bamboo.
3. Modify the bamboo-agent.cfg to perf8.
4. Start the service.
5. Approve the agent again.
But this only leaves a duplicate remote agent in Bamboo (PERF8, perf8(2)) instead of renaming the existing one.
Since it's a VMware VM, changing the name via the agent's Edit details page in Bamboo will be washed out when I revert the agent. Could someone please help me fix this?

What worked, without deleting the agent in Bamboo:
1. Start the Bamboo agent service on the agent machine.
2. Modify the bamboo-agent.cfg to perf8.
3. Stop the service.
4. Start the Bamboo agent service again.
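Since the rename has to survive snapshot reverts, the config edit and service restart can also be scripted on the agent VM itself. A minimal PowerShell sketch, assuming the agent runs as a Windows service; the config path and service name below are assumptions, so adjust them to your install:

```powershell
# Sketch only: config path and service name are assumptions; adjust for your agent install.
$cfg = "C:\bamboo-agent-home\bamboo-agent.cfg"

# Lower-case the agent name in the config, then restart the agent service
(Get-Content $cfg) -replace 'PERF8', 'perf8' | Set-Content $cfg
Restart-Service -Name "bamboo-remote-agent"
```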

Related

How to restart a Service Fabric Application

I have a gMSA service account running a stateless Service Fabric application. The account has recently been added as a member of a new security group. We're not seeing the application work correctly, and I think it's because the user's claims were loaded at application startup. I've seen that to get this to work with Windows Services you need to restart the service (mmc -> Services, right click, Restart). I would like to do something similar in Service Fabric.
I see the option of restarting the node, but that is a more heavy-handed approach than I want to use. This is in production and I want to scope the solution to the problem. The other applications on the node do not have an issue, so I would prefer not to bring them down.
Service Fabric Deactivate (pause) vs Deactivate (restart)?
Thanks in advance,
Greg
What you are looking for is the Restart-ServiceFabricDeployedCodePackage command.
The Restart-ServiceFabricDeployedCodePackage cmdlet ends the code package process, which restarts all of the user service replicas hosted in that process. This restart simulates code package process failures in the cluster, which tests the failover recovery paths of your service.
You can specify a code package, or you can specify a ReplicaSelector to restart the node and code package combination where the replica is hosted. This simplifies tests on the primary host node by not having to determine which Service Fabric node is the primary node before restarting that node.
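As a rough illustration (the cluster endpoint, node, application, service manifest, and code package names below are placeholders, not values from the question), the call looks something like this:

```powershell
# Connect to the cluster first (endpoint is a placeholder)
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.example.com:19000"

# Restart only the code package hosting the affected service,
# leaving the node and the other applications on it untouched.
Restart-ServiceFabricDeployedCodePackage `
    -NodeName "Node01" `
    -ApplicationName "fabric:/MyStatelessApp" `
    -ServiceManifestName "MyStatelessServicePkg" `
    -CodePackageName "Code"
```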

Bamboo remote agent pool

In Jenkins, we can define labels and group a number of build slaves under a label. The label can then be mapped to a job, so Jenkins will automatically pick an available build slave from the pool and execute the job. Is something similar available in Bamboo to create a remote agent pool?
I hope I understood your question correctly, but anyway... there's a similar concept in Bamboo. There are two types of agents:
Local agents, which run as threads inside the Bamboo server. They are generally not recommended for bigger Bamboo instances for performance and security reasons.
Remote agents, which are separate processes running the builds, ideally on a different machine so the Bamboo server doesn't suffer from the extra hardware load.
The matching between jobs and agents is based on job requirements and agent capabilities, e.g.:
An agent defines capabilities, which effectively state what it can build and which tools are installed, e.g. .NET or a JDK.
A job/deployment environment defines requirements, which are needed to successfully accomplish the task, e.g. Git and Maven.
In the end, Bamboo tries to find an agent that provides the full set of capabilities a job/deployment environment requires.
Special rules apply if an agent is dedicated to a job or environment, or if the agent is an elastic agent (runs in EC2).
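To get something close to a Jenkins label, one approach (my suggestion, not something the links below prescribe) is to declare the same custom capability on every agent you want in a "pool" and add a matching requirement to the jobs. On a remote agent a custom capability can be declared in the bamboo-capabilities.properties file; a PowerShell sketch with an assumed path, key, and value:

```powershell
# Sketch only: the path, capability key, and value are assumptions.
# Any key=value pair in this file becomes a custom capability that jobs can require.
$props = "C:\bamboo-agent-home\bin\bamboo-capabilities.properties"
Add-Content -Path $props -Value "agent.pool=build-pool"

# Restart the agent so the new capability is picked up (assumed service name).
Restart-Service -Name "bamboo-remote-agent"
```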
More reading:
https://confluence.atlassian.com/bamkb/difference-between-local-agents-and-remote-agents-457703602.html
https://confluence.atlassian.com/bamboo/configuring-a-job-s-requirements-289277064.html
https://confluence.atlassian.com/bamboo/requirements-for-deployment-environments-838427584.html
https://confluence.atlassian.com/bamboo/dedicating-an-agent-629015108.html
https://confluence.atlassian.com/bamboo/managing-your-elastic-image-configurations-289277147.html

Push code from VSTS repository to on-prem TFS?

This is my first post on here, so forgive me if I've missed an existing answer to this question.
Basically my company conducts off-site development for various clients in government. Internally, we use cloud VSTS, Octopus Deploy and Selenium to ensure a continuous delivery pipeline in our internal Azure environments. We are looking to extend this pipeline into the on-prem environments of our clients to cut down on unnecessary deployment overhead. Unfortunately, due to security policies we are unable to use our VSTS/Octopus instances to push code directly into the client environment, so I'm looking for a way to get code from our VSTS environment into an on-prem instance of TFS hosted on their end.
What I'm after, really, is a system whereby the client logs into our VSTS environment, validates the code, then pushes some kind of button which will pull it to their local TFS, where a replica of our automated build and test process will manage the CI pipeline through their environments and into prod.
Is this at all possible? What are my options here?
There is no direct way to migrate source code with history from VSTS to an on-premises TFS. You would need a 3rd-party tool, such as the commercial edition of OpsHub (note that it is not free).
It sounds like you need a new feature that is coming to Octopus Deploy; see https://octopus.com/blog/roadmap-2017 --> Octopus Release Promotions.
I quote:
Many customers work in environments where releases must flow between more than one Octopus server - the two most common scenarios being:
Agencies which use one Octopus for dev/test, but then need an Octopus server at each of their customer's sites to do production deployments
I suggest the following, though it involves a small custom script.
Add a build agent to your VSTS that is physically located on the customer's premises. This is easy: just register the agent against the online endpoint.
Create a build definition in VSTS that gets the code from VSTS but, instead of building, commits it to the local TFS. You will need a small piece of PowerShell here; you can add it as a custom PowerShell step in the build definition.
The local TFS orchestrates the rest.
Custom code:
Say your agent is on d:/agent.
1. Keep the local TFS workspace mapped to some directory (say c:/tfs).
2. The script copies the new sources from d:/agent/work/ over to c:/tfs.
3. It commits from c:/tfs to the local TFS.
Note: you will need the /force option (and probably some more) to prevent conflicts.
I believe this is not as ugly as it sounds.
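A rough sketch of what that custom PowerShell step could look like, assuming TFVC on the on-prem side, tf.exe on the agent's PATH, and c:/tfs already mapped to a local workspace (the paths, excluded folders, and check-in comment are all placeholders):

```powershell
# Sketch only: paths, excluded folders, and the comment are assumptions.
$source = $env:BUILD_SOURCESDIRECTORY   # the VSTS agent's working copy, e.g. d:\agent\_work\1\s
$target = "C:\tfs"                      # directory mapped to the on-prem TFS workspace

# Bring the local workspace up to date, overwriting any stale local content
tf get $target /recursive /force /noprompt

# Mirror the new sources over the mapped folder, keeping workspace metadata
robocopy $source $target /MIR /XD '$tf' '.git'

# Pend adds for new files and check everything in
# (with a server workspace you may also need 'tf checkout' before copying)
tf add "$target\*" /recursive /noprompt
tf checkin $target /recursive /noprompt /comment:"Sync from VSTS build $env:BUILD_BUILDNUMBER"
```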

How to detach Jprofiler8 'jpenable' remote agent

I need to profile an application that runs on a remote machine where a GUI is not allowed, so I started a remote profiling session with JProfiler 8 and ran the bin/jpenable agent on the remote host. After the analysis completed successfully, I need to stop that remote jpenable JProfiler 8 agent. How can I do that?
To check whether the previously started agent is still running, I ran bin/jpenable again. Now I don't see the previously bound JVM, so I assume it is already bound to the earlier agent.
Unfortunately, it is not possible to unload a JVMTI profiling agent. The JVM only unloads agents when it shuts down.

How to halt provisioning of one VM until another VM is done?

Using Vagrant+Chef Solo I'm setting up two VMs: #1 is a TeamCity server, #2 is a TeamCity agent. Provisioning is done by first installing the TeamCity server package on VM #1, then the agent VM is booted and requests data from the server which is used to install the agent. That whole thing works fine.
But now I want to alter the server after the agent is done provisioning. I want to modify the server's database directly, to change an attribute that is only available after the agent has spun up. Is there a way for one VM's provisioning to trigger another VM? Once the agent is done, I'd like to somehow resume provisioning the server so I can make the database edit.
Any thoughts, recommendations, or feedback welcomed. I'm new to Vagrant, Chef, and TeamCity, so there's a chance I'm missing a much easier solution.
* Why do I want to edit the DB directly, you may be wondering? TeamCity agents must be authorized before they can be used, and I want to do this programmatically. The solution I've found is to directly edit the DB, because authorization functionality is not exposed via the TeamCity REST API (as far as I can tell).
If you can test whether the agent is installed/answering, you may add a ruby_block that loops over this test before continuing the recipe execution.
This loop should have a sleep and a counter to avoid infinite loops.
I have no knowledge of TeamCity, so I can't tell if it's the best way.
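A minimal sketch of that "loop with a sleep and a counter" idea, shown here in PowerShell for illustration (host name, port, retry count, and delay are all assumptions; the default TeamCity agent port is usually 9090). Inside a Chef recipe the same logic would live in the ruby_block:

```powershell
# Sketch only: host, port, retry count, and delay are assumptions.
$attempts = 0
while (-not (Test-NetConnection -ComputerName "teamcity-agent" -Port 9090 -InformationLevel Quiet)) {
    $attempts++
    if ($attempts -ge 30) { throw "TeamCity agent did not come up after 30 attempts" }
    Start-Sleep -Seconds 10
}
# Continue with the rest of the provisioning once the agent answers.
```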
In general, Chef is designed to manage your system, not simply provision it (though this is less true in the modern cloud world with "golden image" strategies). Nonetheless, in your case your best bet is to just set up chef-client as a service that runs every 15 minutes. Once the client has finished provisioning, the next run of the server will be able to authorize it.
If you really want to "trigger" the one from the other, you'd need to either do that externally with something like etcd or consul, or set up an SSH keypair between the boxes and add a ruby_block on the client that either does the database modification directly or calls chef-client on the server.