I have a GitHub Actions workflow that loops through a list of services and runs npm run test-compiled for each one sequentially, which takes 30 minutes to finish.
I am trying to see how I can reduce the time this step takes. One option I considered was to create multiple jobs in the workflow and have each job run its tests concurrently; however, this always fails because files from the npm install are missing.
Are there any recommendations on how to shorten the time needed for the workflow to complete?
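If you do split the work across multiple jobs, note that each job runs in a fresh environment, so each job needs its own checkout and npm install step, which is likely why files from the install were missing. Alternatively, a minimal sketch of parallelizing inside the single existing job, assuming the per-service test runs don't interfere with each other (services.txt is a hypothetical file listing one service directory per line):

# Run each service's tests in the background and fail the step if any of them fails.
pids=()
while read -r service; do
  (cd "$service" && npm run test-compiled) &
  pids+=("$!")
done < services.txt

status=0
for pid in "${pids[@]}"; do
  wait "$pid" || status=1
done
exit "$status"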
I have a CI job that ran last week:
Is there a way to find out exactly when it finished? I am trying to debug a problem that we just noticed, and knowing if the job finished at 9:00am or 9:06am or 6:23pm a week ago would be useful information.
The output from the job does not appear to indicate what time it started or stopped. When I searched Google, I got information about how to run jobs in serial or parallel and how to create CI jobs, but nothing about getting the timing of a job.
For the future, I could put date into script or before_script, but that is not going to help with this job.
This is on a self-hosted GitLab instance. I am not sure of the version or what optional settings have been enabled.
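If the instance is recent enough to expose the v4 REST API, the Jobs API reports the start and finish times for a past job. A minimal sketch, where gitlab.example.com, PROJECT_ID, JOB_ID and GITLAB_API_TOKEN are placeholders you would substitute, and jq is only used to pick out the relevant fields:

# Query the Jobs API for a single job and print its timing fields.
curl --silent --header "PRIVATE-TOKEN: ${GITLAB_API_TOKEN}" \
  "https://gitlab.example.com/api/v4/projects/${PROJECT_ID}/jobs/${JOB_ID}" \
  | jq '{created_at, started_at, finished_at, duration}'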
What is the best practice for running 2 pipelines on the same project?
pipeline_1:
jobs that build the jars and run the tests;
should run on each merge request.
pipeline_2:
jobs that build the jars and run the e2e tests;
should run every day.
Can I create 2 pipelines on the same project, where one is scheduled and the other runs on each merge request, with some of the build jobs shared between both pipelines but different test jobs?
Each "stage" in the .gitlab-ci.yml file is considered a pipeline, so this should just be a matter of adding the correct scripting for each stage.
You could run pipeline_2 on a pipeline schedule and make it dependent on the success of pipeline_1. That's what I would do.
Reference: https://docs.gitlab.com/ee/ci/parent_child_pipelines.html
I'm using a webhook to trigger my GitLab pipeline. Sometimes this trigger fires a bunch of times, but my pipeline only needs to run for the last one (static site generation). Right now, it will run as many pipelines as I have triggered. My pipeline takes 20 minutes, so sometimes it's running for the rest of the day, which is completely unnecessary.
https://docs.gitlab.com/ee/ci/yaml/#interruptible and https://docs.gitlab.com/ee/user/project/pipelines/settings.html#auto-cancel-pending-pipelines only work on pushed commits, not on triggers
A similar problem is discussed in gitlab-org/gitlab-foss issue 41560
Example of a use-case:
I want to always push the same Docker "image:tag", for example: "myapp:dev-CI". The idea is that "myapp:dev-CI" should always be the latest Docker image of the application that matches the HEAD of the develop branch.
However, if 2 commits are pushed, then 2 pipelines are triggered and executed in parallel, and the most recently triggered pipeline often finishes before the older one.
As a consequence, the pushed Docker image is not the latest one.
Proposition:
As a workaround on *nix, you can get the running pipelines from the API and either wait until they have finished or cancel them via the same API.
In the example below, the script checks for running pipelines with lower IDs on the same branch and sleeps until they are done.
The jq package is required for this code to work.
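Something along these lines (GITLAB_API_TOKEN is a placeholder for a token stored in a CI/CD variable; the other variables are GitLab's predefined CI variables):

# Wait until no older pipeline for this branch is still running.
while true; do
  older=$(curl --silent --header "PRIVATE-TOKEN: ${GITLAB_API_TOKEN}" \
    "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/pipelines?ref=${CI_COMMIT_REF_NAME}&status=running" \
    | jq --argjson current "${CI_PIPELINE_ID}" '[.[] | select(.id < $current)] | length')
  if [ "${older}" -eq 0 ]; then
    break    # no older pipeline is running any more, safe to continue
  fi
  echo "Waiting for ${older} older pipeline(s) on ${CI_COMMIT_REF_NAME} to finish..."
  sleep 30
done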
Or:
Create a new runner instance
Configure it to run jobs marked as deploy with concurrency 1
Add the deploy tag to your CD job.
It's now impossible for two deploy jobs to run concurrently.
To guard against a situation where an older pipeline runs after a newer one, add a check to your deploy job that exits if the current pipeline ID is lower than the ID of the pipeline that produced the current deployment; see the sketch below.
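A rough sketch of such a guard, assuming the previous deployment recorded its pipeline ID somewhere it can be read back from (the file path used here is purely a placeholder):

# Read the pipeline ID recorded by the last deployment (placeholder location).
deployed_id=$(cat /var/deploy/last_pipeline_id 2>/dev/null || echo 0)
# Skip the deploy if this pipeline is older than the one already deployed.
if [ "${CI_PIPELINE_ID}" -lt "${deployed_id}" ]; then
  echo "Pipeline ${CI_PIPELINE_ID} is older than deployed pipeline ${deployed_id}; skipping deploy."
  exit 0
fi
# ... perform the actual deployment here ...
# Record this pipeline's ID for the next deploy to compare against.
echo "${CI_PIPELINE_ID}" > /var/deploy/last_pipeline_id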
Slight modification:
For me, one slight change: I kept the global concurrency setting the same (8 runners on my machine so concurrency: 8).
But, I tagged one of the runners with deploy and added limit: 1 to its config.
I then updated my .gitlab-ci.yml to use the deploy tag in my deploy job.
This works perfectly: my code_tests jobs can run simultaneously on 7 runners, but deploy is "single threaded" and any other deploy jobs go into the pending state until that runner is freed up.
I've built a small Vue project with 4 components, and I want to build it for upload, but it takes forever and the build never completes.
I waited for 40 minutes and the build still did not complete.
Here is a screenshot:
As was mentioned in the comments, building the application should finish in a few seconds.
One possible solution is to delete the node_modules folder and install all the dependencies again; that can help.
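For example, assuming npm and a standard project layout, run from the project root:

# Remove the installed dependencies and reinstall them from package.json.
rm -rf node_modules
npm install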
Another possible solution is to allocate more memory for the task:
node --max_old_space_size=4096 node_modules/.bin/vue-cli-service build
This line will call node with an increased memory limit (4 GB) and will execute the build task.
You can read more about how to serve and build the application here:
https://cli.vuejs.org/guide/cli-service.html#using-the-binary
I have a build where, in the pre-compilation stage, NuGet restore takes ~3 minutes to restore packages from the cache, and the npm restore takes about as long.
These two cache restores could run in parallel, but I am not clear on whether this is possible using VSTS phases.
Each phase may use different agents. You should not assume that the state from an earlier phase is available during subsequent phases.
What I would need is a way to pass the content of packages and node_modules directories from two different phases into a third one that invokes the compiler.
Is this possible with VSTS phases?
I wouldn't do this with phases. I'd consider not doing it at all. Restoring packages (regardless of type) is an I/O bound operation -- you're not likely to get much out of parallelizing it. In fact, it may be slower. The bulk of the time spent restoring packages is either waiting for a file to download, or copying files around on disk. Downloading twice as many files just takes twice as long. Copying two files at once takes double the time. That's roughly speaking, of course -- it may be a bit faster in some cases, but it's not likely to be significantly faster for the average case.
That said, you could write a script to spin off two separate jobs and wait for them to complete. Something like this, in PowerShell:
# Start both restores as background jobs.
$dotnetRestoreJob = (Start-Job -ScriptBlock { dotnet restore }).Id
$npmRestoreJob = (Start-Job -ScriptBlock { npm install }).Id
# Poll both jobs once per second until neither is still running.
do {
    $jobStatus = Get-Job -Id @($dotnetRestoreJob, $npmRestoreJob)
    $jobStatus
    Start-Sleep -Seconds 1
}
while ($jobStatus | where { $_.State -eq 'Running' })
Of course, you'd probably want to capture the output from the jobs and check for whether there was a success exit code or a failure exit code, but that's the general idea.
The real problem here wasn't that npm install and NuGet restore could not be run in parallel on a VSTS hosted agent.
The real problem was that hosted agents do not use the NuGet cache by design.
We have determined that this issue is not a bug. Hosted agent will download nuget packages every time you queue a new build. You could not speed this nuget restore step using a hosted agent.
https://developercommunity.visualstudio.com/content/problem/148357/nuget-restore-is-slow-on-hostedagent-2017.html
So the solution that took the NuGet restore time from 240s down to 20s was to move the build to a local agent. That way the local cache does get used.