Why does artifact publish take a long time in Bamboo?

We are using Bamboo to build our code, create artifacts, and deploy.
Problem Scenario
I have a plan with a stage containing 3 jobs (dev/test/prod). The jobs build the code and publish a 16-20 MB artifact as a shared artifact. When I run this plan, the publish takes 8-9 minutes in all 3 jobs, and the publish happens at approximately the same timestamp in each job.
Here is an example log statement:
simple 10-Sep-2021 13:46:15 Publishing an artifact: Preview Artifact
simple 10-Sep-2021 13:55:09 Finished publishing of artifact Required shared artifact: [Preview Artifact], pattern: [**/Artifact.*.zip] in 8.897 min
I went onto the build server (Windows Server 2012) and viewed the artifact file in the work directory and in the artifacts directory. The file timestamps are indeed almost 9 minutes apart.
This is very consistent: looking at many previous builds, the publish consistently takes 8 or 9 minutes.
Fixed Scenario
I just edited the plan and disabled 2 of the jobs. Now the artifact publish step takes mere seconds:
27-Sep-2021 15:20:19 Publishing an artifact: Preview Artifact
27-Sep-2021 15:20:56 Finished publishing of artifact Required shared artifact: [Preview Artifact], pattern: [**/Artifact.*.zip] in 37.06 s
Questions
Why is the artifact publish so slow when I run concurrent jobs? What is Bamboo doing during the publish step that could take so long?
I have 20 other build plans (that do not use concurrent jobs) in which the artifact copy takes less than a minute. I have never seen this problem with any of these other plans.
I don't see anything special in the documentation, nor can I find a problem like this when I search Google and Stack Overflow. I need the artifact to be shared because I use it in a Deployment project.
EDIT:
Now that I think of it, 37 seconds is way too long as well. I just copied the file manually and it took about a second. Why is the publish taking so long even without concurrent jobs?

Related

How to interrupt triggered GitLab pipelines

I'm using a webhook to trigger my GitLab pipeline. Sometimes this trigger fires a bunch of times, but my pipeline only has to run the last one (static site generation). Right now, it will run as many pipelines as I have triggered. Each pipeline takes 20 minutes, so sometimes it ends up running for the rest of the day, which is completely unnecessary.
https://docs.gitlab.com/ee/ci/yaml/#interruptible and https://docs.gitlab.com/ee/user/project/pipelines/settings.html#auto-cancel-pending-pipelines only work on pushed commits, not on triggers
A similar problem is discussed in gitlab-org/gitlab-foss issue 41560
Example of a use-case:
I want to always push the same Docker "image:tag", for example: "myapp:dev-CI". The idea is that "myapp:dev-CI" should always be the latest Docker image of the application that matches the HEAD of the develop branch.
However, if 2 commits are pushed, then 2 pipelines are triggered and executed in parallel, and the most recently triggered pipeline often finishes before the older one.
As a consequence, the pushed Docker image is not the latest one.
Proposition:
As a workaround on *nix, you can fetch the running pipelines from the API and either wait until they finish or cancel them through the same API.
In the example below, the script checks for running pipelines with lower IDs on the same branch and sleeps until they finish.
The jq package is required for this code to work.
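A minimal sketch of such a wait loop, assuming a GitLab CI shell job where GITLAB_API_TOKEN is a CI variable you have defined with API read access (CI_API_V4_URL, CI_PROJECT_ID, CI_PIPELINE_ID, and CI_COMMIT_REF_NAME are predefined GitLab variables):

# Sleep while any running pipeline on this branch has a lower ID than ours.
while true; do
  older=$(curl -s --header "PRIVATE-TOKEN: $GITLAB_API_TOKEN" \
    "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines?ref=$CI_COMMIT_REF_NAME&status=running" \
    | jq --argjson id "$CI_PIPELINE_ID" '[.[] | select(.id < $id)] | length')
  [ "$older" -eq 0 ] && break
  echo "Waiting for $older older running pipeline(s)..."
  sleep 30
done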
Or:
Create a new runner instance
Configure it to run jobs marked as deploy with concurrency 1
Add the deploy tag to your CD job.
It's now impossible for two deploy jobs to run concurrently.
To guard against a situation where an older pipeline may run after a newer one, add a check in your deploy job that exits if the current pipeline ID is lower than the ID of the pipeline that performed the current deployment, as sketched below.
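A minimal sketch of that guard, assuming the deploy target records the deploying pipeline's ID in a file (the /var/deploy/last_pipeline_id path is hypothetical; CI_PIPELINE_ID is a predefined GitLab variable):

# Skip the deploy if a newer pipeline has already deployed.
last_deployed=$(cat /var/deploy/last_pipeline_id 2>/dev/null || echo 0)
if [ "$CI_PIPELINE_ID" -le "$last_deployed" ]; then
  echo "Pipeline $CI_PIPELINE_ID is not newer than $last_deployed; skipping deploy."
  exit 0
fi
# ... perform the actual deployment here ...
echo "$CI_PIPELINE_ID" > /var/deploy/last_pipeline_id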
Slight modification:
For me, one slight change: I kept the global concurrency setting the same (8 runners on my machine, so concurrency: 8).
But I tagged one of the runners with deploy and added limit: 1 to its config.
I then updated my .gitlab-ci.yml to use the deploy tag in my deploy job.
Works perfectly: my code_tests job can run simultaneously on 7 runners but deploy is "single threaded" and any other deploy jobs go into pending state until that runner is freed up.

Problem: building my Vue project takes forever

I've built a small Vue project with 4 components, and I want to build it for upload, but the build takes forever and never completes.
I waited for 40 minutes and the build still had not finished.
As was mentioned in the comments, building the application should finish in a few seconds.
One possible solution is to delete the node_modules folder and install all dependencies again; that can help.
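For example, from the project root (assuming npm as the package manager):

rm -rf node_modules
npm install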
Another possible solution is to allocate more memory for the task:
node --max_old_space_size=4096 node_modules/.bin/vue-cli-service build
This calls node with an increased memory limit (4 GB) and runs the build task.
You can read more about how to serve and build the application here:
https://cli.vuejs.org/guide/cli-service.html#using-the-binary

VSTS multi-phased builds - run nuget restore and npm install in parallel

I have a build where, in the pre-compilation stage, nuget restore takes ~3 minutes to restore packages from cache, and npm install takes about as long.
These two cache restores could run in parallel, but I am not clear on whether this is possible using VSTS phases.
Each phase may use different agents. You should not assume that the state from an earlier phase is available during subsequent phases.
What I would need is a way to pass the content of packages and node_modules directories from two different phases into a third one that invokes the compiler.
Is this possible with VSTS phases?
I wouldn't do this with phases. I'd consider not doing it at all. Restoring packages (regardless of type) is an I/O bound operation -- you're not likely to get much out of parallelizing it. In fact, it may be slower. The bulk of the time spent restoring packages is either waiting for a file to download, or copying files around on disk. Downloading twice as many files just takes twice as long. Copying two files at once takes double the time. That's roughly speaking, of course -- it may be a bit faster in some cases, but it's not likely to be significantly faster for the average case.
That said, you could write a script to spin off two separate jobs and wait for them to complete. Something like this, in PowerShell:
# Start both restores as background jobs, then poll until both finish.
$dotnetRestoreJob = (Start-Job -ScriptBlock { dotnet restore }).Id
$npmRestoreJob = (Start-Job -ScriptBlock { npm install }).Id
do {
    $jobStatus = Get-Job -Id @($dotnetRestoreJob, $npmRestoreJob)
    $jobStatus
    Start-Sleep -Seconds 1
}
while ($jobStatus | Where-Object { $_.State -eq 'Running' })
Of course, you'd probably want to capture the output from the jobs and check for whether there was a success exit code or a failure exit code, but that's the general idea.
The real problem here wasn't that npm install and nuget restore could not be run in parallel on a VSTS hosted agent.
The real problem was that hosted agents do not use the NuGet cache, by design.
We have determined that this issue is not a bug. Hosted agent will download nuget packages every time you queue a new build. You could not speed this nuget restore step using a hosted agent.
https://developercommunity.visualstudio.com/content/problem/148357/nuget-restore-is-slow-on-hostedagent-2017.html
So the solution that took the nuget restore time from 240 s down to 20 s was to move the build to a local agent. That way, the local cache does get used.

Does Drone support configuring a timeout value for builds?

I set up a local Drone server for our CI. Our project is a Java project managed by Maven. When the mvn clean install command runs, Maven downloads all dependencies into the ~/.m2 directory. The first run of this command downloads a huge amount of data from the remote Maven repository, which may take a very long time. In this case, I get the error below on Drone CI.
ERROR: terminal inactive for 15m0s, build cancelled
I understand that this message means there has been no output on the console for 15 minutes. But this is a normal case in my build environment. I wonder whether I can configure the 15m to be a larger value so I can build our project.
My previous answer is outdated. You can now change the default timeout for each individual repository from the repository settings screen. This setting is only available to system administrators.
You can increase the terminal inactive timeout by passing DRONE_TIMEOUT=<duration> to each of your agents.
docker run -e DRONE_TIMEOUT=15m drone/drone:0.5 agent
The timeout value may be any valid Go duration string [1].
# 30 minute timeout
DRONE_TIMEOUT=30m
# 1 hour timeout
DRONE_TIMEOUT=1h
# 1 hour, 30 minute timeout
DRONE_TIMEOUT=1h30m
[1] https://golang.org/pkg/time/#ParseDuration
Looking at the Drone source code, it looks like there are environment variables DRONE_TIMEOUT and DRONE_TIMEOUT_INACTIVITY that you can use to configure the inactivity timeout. I tried putting them in my .drone.yml file and they didn't seem to do anything, so this may only be configurable at a higher level.
Here is the reference to the environment variable DRONE_TIMEOUT_INACTIVITY:
https://github.com/drone/drone/blob/17e5eb50363f3fcdc0a0461162bee93041d600b7/drone/exec.go#L62
Here is the reference to the environment variable DRONE_TIMEOUT:
https://github.com/drone/drone/blob/eee4fd1fd2556ac9e4115c746ce785c7364a6f12/drone/agent/agent.go#L95
Here is where the error is thrown:
https://github.com/drone/drone/blob/5abb5de44aa11ea546db1d3846d603eacef7f0d9/agent/agent.go#L206

Your total DEV@cloud disk usage is over your subscription's quota

I have one Jenkins job.
My first configuration stores the last 60 builds.
After 32 builds I get the following message:
Build execution is suspended due to the following reason(s):
Your total DEV@cloud disk usage is over your subscription's quota. Your subscription Free allows 2 GB, but you are using 2052 MB across all services (Forge and Jenkins). To fix this, you can either upgrade your subscription or delete some data in your Forge repositories, Jenkins workspaces or build artifacts.
OK, the build artifacts are too big.
Now I have configured the Jenkins job to store 60 builds but keep artifacts for only 3 of them.
Where can I find the (old) build artefacts?
Where can I delete them?
You can manually delete build artifacts by deleting builds. This can be achieved by selecting a build from the build history and then deleting it with the "delete this build" link. This is quite cumbersome, so a better solution is to go to the build config and do the following: check the "Discard old builds" checkbox, click the "Advanced" button, and put a suitable value in either "Days to keep artifacts" or "Max # of builds to keep with artifacts".
You could also install the disk usage plugin, which gives you information on how much space your jobs are taking.
Here's a wiki article about managing disk usage on DEV@cloud.