GitLab Runner fails to upload artifacts with "invalid argument" error

I'm completely new to implementing GitLab's CI/CD pipelines, but it's been going quite well. In fact, for my ASP.NET project, if I specify a Publish Profile in the msbuild command that uses Web Deploy, it actually deploys the code successfully to the web server.
However, I now want the "build" job to create artifacts which are uploaded to GitLab so that I can deploy them in a subsequent job. We're using a self-hosted instance of GitLab, for which I'm not an admin, but I can speak to the admin if I know what I'm asking for!
So I've configured my gitlab-ci.yml file like this:
variables:
  NUGET_PATH: 'C:\Program Files\Nuget\Nuget.exe'
  NUGET_SOURCES: 'https://api.nuget.org/v3/index.json'
  MSBUILD_PATH: 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Current\Bin\msbuild.exe'

stages:
  - build

build-job:
  variables:
    CI_DEBUG_TRACE: "true"
  stage: build
  script:
    - '& "$env:NUGET_PATH" restore ApplicationTemplate.sln -Source "$env:NUGET_SOURCES"'
    - '& "$env:MSBUILD_PATH" ApplicationTemplate\ApplicationTemplate.csproj /p:DeployOnBuild=true /p:Configuration=Release /p:PublishProfile=FolderPublish.pubxml'
  artifacts:
    paths:
      - '.\ApplicationTemplate\bin\Release\Publish\'
The output shows that this builds the code just fine, and it also seems to successfully find the artifacts for upload. However, when it uploads the artifacts, even though the request gets a 200 OK response, the process fails. Here is the log output:
So, it finds the artifacts, it attempts to upload them and even gets a 200 OK response (in contrast to the handful of similar reports of this error I've been able to find online), but it still fails due to an invalid argument.
I've already enabled verbose debugging, as you can see from the output, but I'm none the wiser. Looking at the GitLab Runner entries in the Windows Event Log on the box where the runner is hosted doesn't shed any light on things either. The total size of the artifacts is 61.1MB, so I don't think my issue is related to that.
Can anyone see from this output what's invalid? Can I identify which argument is invalid and/or why it's invalid?
Edit: Things I've tried
Specifying a value for artifacts:expire_in.
Setting artifacts:public to FALSE, since I'm using a self-hosted GitLab environment and the default value for this setting (TRUE) is not valid in such an environment. (Both of these are shown in the sketch after this list.)
Trying every format I can think of for the value of the artifacts:paths setting (this seems to be incredibly robust - regardless of the format I use, the Runner seems to have no problem parsing it and finding the files to upload).
Taking a cue from this question, I created a new project with a very simple build job to upload a single file:
stages:
  - build

build-job:
  variables:
    CI_DEBUG_TRACE: "true"
  stage: build
  script:
    - echo "Test" > test.txt
  artifacts:
    paths:
      - test.txt
About 50% of the time this job hangs on the uploading of the artifacts and I have to cancel it. The other half of the time it fails in exactly the same way as my previous project:
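For reference, here is roughly how the artifact settings from the list above were combined in the original build job (a minimal sketch; the expire_in value shown is just an illustrative example rather than the exact value I tried):

build-job:
  stage: build
  script:
    - '& "$env:MSBUILD_PATH" ApplicationTemplate\ApplicationTemplate.csproj /p:DeployOnBuild=true /p:Configuration=Release /p:PublishProfile=FolderPublish.pubxml'
  artifacts:
    expire_in: 1 week      # example value only
    public: false          # explicitly FALSE for our self-hosted instance
    paths:
      - '.\ApplicationTemplate\bin\Release\Publish\'

Neither setting made any difference to the failure.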

After countless hours working on this, it seems that ultimately the issue was that our internal Web Application Firewall was blocking some part of the transfer of artefacts to the server, or the response back from it. With the WAF reconfigured not to block traffic from the machine running the GitLab Runner, the artefacts are successfully uploaded and the job succeeds.
This would have been significantly easier to diagnose if the logging from GitLab was better. As per my comment on this issue, it should be possible to see the content of the response from the GitLab server after uploading artefacts, even when the response code is 200.
What's strange - and made diagnosing the issue even harder - is that when I worked through the issue with the admin of our GitLab instance, digging through logs and running it in debug mode, the artefact upload process was uploading something successfully. We could see, for example, the GitLab Runner's log had been uploaded to the server. Clearly the WAF's blocking was selective and didn't block everything in both directions.

Related

DBT Docs Throwing Service Unavailable Error

I am not sure why this file cannot be found.
I am using Dagster to run the dbt docs generate command and upload those files to S3 so that they can be deployed via CI/CD to an nginx server.
The documentation will sometimes load after a fresh deploy, but quickly starts throwing the above error. I can confirm, though, that the dbt debug and dbt docs generate CLI commands produce no errors, and all the files (including catalog.json) are present on the nginx server.
I am not sure how to debug this weird issue. Thanks!
I tried running the docs generate command with the --no-compile flag, but it produces the same result.
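For reference, the generate-and-upload step reduces to something like this (a simplified sketch written as a plain CI job; in my setup Dagster orchestrates these commands, and the bucket name here is a placeholder):

docs-upload:
  stage: deploy
  script:
    # dbt writes index.html, manifest.json and catalog.json into target/
    - dbt docs generate
    # bucket name is a placeholder
    - aws s3 sync target/ s3://my-dbt-docs-bucket/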

Why do I care which gitlab runner runs my pipeline

I have a gitlab-ci.yml file which I have inherited, and I have a local gitlab server running on my laptop. I have managed to create several gitlab runners and kick off this inherited pipeline -- which gets immediately stuck. The error I am getting is:
...because you dont have any active runners online or available with any of these tags assigned to them: sometag
I have pieced together that the gitlab-ci.yml file references several tags, and if there is a runner with a given tag, that runner will pick up my pipeline --- but why do I need this control (or hassle, more like it)? What does it matter which runner runs my pipeline? Do I need to closely examine the gitlab-ci.yml file and, based on that, make some special runner for it?
After I modified my runners and gave them the missing tags, I am still getting the same error. Looking at the runner API results, where it says "online" the value is "null". What does that mean? How do I make this runner "online"?
There may be several runners, which will have different executors set up and thus different capabilities. So the best practice is to use tags in the gitlab-ci.yml file to run jobs on a particular runner.
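For example, a job can be restricted to runners that carry a matching tag (a minimal sketch; the job name is illustrative and the tag simply mirrors the one from the error message above):

test-job:
  stage: test
  tags:
    - sometag      # only runners registered with this tag will pick up this job
  script:
    - echo "This job runs on a runner tagged 'sometag'"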
To bring your runner online, you can go to the server where that particular gitlab-runner is installed and restart the gitlab-runner service using gitlab-runner restart, either as the user who installed the runner or as root.
Sometimes it can happen that you have changed or added tags for the runner using the GitLab UI, but the same tags have not been saved in the config.toml file. (The config.toml file stores the gitlab-runner configuration; more details here: https://docs.gitlab.com/runner/configuration/advanced-configuration.html)
So, in this particular case, you have to go to the server where the gitlab-runner is installed, modify the tags in the config.toml file, and then restart the gitlab-runner service. If everything goes well, you will see the runner as online in the GitLab UI.

Gitlab-CI: AWS S3 deploy is failing

I am trying to create a deployment pipeline for Gitlab-CI on a React project. The build is working fine, and I use artifacts to store the dist folder from my yarn build command. This is working fine as well.
The issue is with my deployment command: aws s3 sync dist/'bucket-name'.
Expected: "Done in x seconds"
Actual:
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Running after_script 00:01
Uploading artifacts for failed job 00:01
ERROR: Job failed: exit code 1
The files seem to have been uploaded correctly to the S3 bucket; however, I do not know why I get an error on the deployment job.
When I run aws s3 sync dist/'bucket-name' locally, everything works correctly.
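For reference, the deploy job in my gitlab-ci.yml looks roughly like this (a simplified sketch; the job name, stage, and bucket are placeholders rather than the exact values from my project):

deploy:
  stage: deploy
  dependencies:
    - build          # pulls in the dist/ artifact produced by the build job
  script:
    # bucket name is a placeholder
    - aws s3 sync dist/ s3://my-bucket-name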
Check out AWS CLI Return Codes
2 -- The meaning of this return code depends on the command being run.
The primary meaning is that the command entered on the command line failed to be parsed. Parsing failures can be caused by, but are not limited to, missing any required subcommands or arguments or using any unknown commands or arguments. Note that this return code meaning is applicable to all CLI commands.
The other meaning is only applicable to s3 commands. It can mean at least one or more files marked for transfer were skipped during the transfer process. However, all other files marked for transfer were successfully transferred. Files that are skipped during the transfer process include: files that do not exist, files that are character special devices, block special device, FIFO's, or sockets, and files that the user cannot read from.
The second paragraph might explain what's happening.
There is no yarn build command. See https://classic.yarnpkg.com/en/docs/cli/run
As Anton mentioned, the second paragraph of his answer was the problem. The solution was removing special characters from a couple of SVGs. I suspect uploading the dist folder as an artifact (zip) might have changed some of the file names altogether, which confused S3. By removing ® and + from the filenames, the issue was resolved.

gitlab: Is there a way to http access an artifact during a job, rather than after?

I'm trying to run a pipeline with two stages. The first stage creates a zip file and the second stage executes a http curl POST for that file. If the curl succeeds, the pipeline is completed.
The problem is that gitlab only exposes the zip file AFTER the pipeline has completed - which means the zip file from the previous pipeline gets sent instead.
I've tried using artifacts and dependencies, but it seems the http url is only exposed for completed pipelines. I tried using the url of the specific job that executed the build stage, but it didn't work either.
Does anyone know how to access an artifact by URL, before pipeline completion?
I was unable to find a way to access the artifact remotely before the pipeline had completed. Sad face.
I have a workaround though - I moved the deploy stage to a separate pipeline. So the first pipeline just executes the build (generating the artifacts) and then triggers the second pipeline. The second pipeline is then able to access the artifacts of the first pipeline, and it just executes a deploy stage. Happy face.
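A minimal sketch of how that split can look in the first project's gitlab-ci.yml, assuming a multi-project trigger (the script, artifact name, and downstream project path are all placeholders; the original setup may have wired the trigger differently):

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - ./create-zip.sh                    # placeholder for whatever produces the zip
  artifacts:
    paths:
      - output.zip                       # placeholder artifact name

trigger-deploy:
  stage: deploy
  trigger:
    project: my-group/deploy-project     # placeholder path to the project that runs the deploy pipeline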

MSBuild SonarQube Runner v1.0 returns with code 1 after "Generating the FxCop ruleset"

I'm trying out SonarQube using the new MSBuild SonarQube Runner v1.0. If I install a fresh SonarQube server locally, the following command works fine, and I can build my solution directly afterward, call the 'end' command, and have the results published in SonarQube:
MSBuild.SonarQube.Runner.exe begin /key:TestKey /name:TestName /version:1.0.0.0
However, if I run this against an existing SonarQube server that exists on the internal network, it always returns with exit code 1:
15:32:40 Creating config and output folders...
15:32:40 Creating directory: c:\Test\MSBuild.SonarQube.Runner-1.0.itsonar\.sonarqube\conf
15:32:40 Creating directory: c:\Test\MSBuild.SonarQube.Runner-1.0.itsonar\.sonarqube\out
15:32:41 Generating the FxCop ruleset: c:\Test\MSBuild.SonarQube.Runner-1.0.itsonar\.sonarqube\conf\SonarQubeFxCop-cs.ruleset
Process returned exit code 1
It seems to download a lot of the dependencies into /.sonarqube, so communication with the server isn't an issue
Things I've tried:
checked the access.log, server.log and event logs
upgraded the existing server to v5.1.2 (clean install using the guide)
upgraded the sonar-csharp-plugin to v4.1
right-clicked all .jar files on the server and ensured they are unblocked
tried the runner directly on the server
(ongoing) tried debugging the source code (happening somewhere in the pre-process step: success comes back as true, but the error code is 1)
disabled UAC on the server and rebooted
re-installed the JRE on both server and client, and ensured JAVA_HOME is set correctly in both PATH and the registry
Any help or pointers greatly accepted. I've been stuck on this for 2 days and can't think of anything else to try except continue trawling through source code. Thank you.
This is a tricky one! Looking at the code, I see only one path that can yield this output:
It fails while generating the FxCop ruleset for C#, as the VB.NET FxCop ruleset message is not logged - see TeamBuildPreProcessor.cs#L149 and TeamBuildPreProcessor.cs#L185
The GenerateFxCopRuleset() call for C# threw a WebException, leading to the call of Utilities.HandleHostUrlWebException() - which has to return true for the exception to be silently swallowed - see Utilities.cs#L153
The only path returning true without logging any message is if a HttpStatusCode.NotFound was received - see Utilities.cs#L158
The control flow goes back to FetchArgumentsAndRulesets(), which returns false, then goes back to Execute() which returns false as well - see TeamBuildPreProcessor.cs#L106
The MSBuild SonarQube Runner "begin" phase (called "preprocessor" in the code) fails - see Program.cs#L42
So, at some point, some SonarQube web service required for the C# FxCop ruleset generation is returning an HTTP 404 error.
Could you monitor your network traffic and listen for the failing HTTP call? [I will keep on updating this answer afterwards]
EDIT: Indeed the error is caused by the quality profile name containing special characters. Such characters are currently badly URL-escaped, which leads to a 404.
I've created the following ticket to fix this issue in the upcoming release: http://jira.sonarsource.com/browse/SONARMSBRU-125