Semgrep doesn't scan, reports METRICS message

I run semgrep with a local YAML rule, but I get the message below, which seems to block the results; there is no obvious error from semgrep.
The result is:
METRICS: Using configs from the Registry (like --config=p/ci) reports
pseudonymous rule metrics to semgrep.dev.
16:38:04 To disable Registry rule metrics, use "--metrics=off".
Running 1 rules...
16:38:04 ran 1 rules on 0 files: 0 findings
The environment is: Linux Docker + Python 3.6.8 + Semgrep 0.81.0.
I already added "--metrics=off" to the command according to its documentation, so is this a bug? It seems this message blocked the scan ("ran 1 rules on 0 files"). Does anybody know the reason? Thanks!
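Note: the METRICS line is informational; the more telling part of the output is "ran 1 rules on 0 files", which usually means no files under the target path matched the rule's languages (or no target path was passed at all). A minimal sketch of a local rule and its invocation, with hypothetical rule id and paths:

    # rule.yaml -- run with: semgrep --config rule.yaml --metrics=off src/
    rules:
      - id: example-eval-usage        # hypothetical rule id
        languages: [python]           # only files of these languages are scanned
        severity: WARNING
        message: avoid eval()
        pattern: eval(...)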


What is causing rules to fire repeatedly?

I have created a local test environment using minikube to test custom falco rules.
The goal is to search for keywords in the namespace and pod names and set an Info priority on them so they can be filtered out in Kibana.
The following are the custom macros and rules that I have written:
- macro: ns_contains_whitelist_terms
  condition: k8s.ns.name = monitoring or k8s.ns.name = jenkins

- macro: pod_name_contains_whitelist_terms
  condition: >
    (k8s.pod.name startswith meseeks or
    k8s.pod.name startswith jenkins or
    k8s.pod.name startswith wazuh)

- rule: priority_whitelist_ns_alert
  desc: add an Info priority to the monitoring and jenkins namespaces
  condition: ns_contains_whitelist_terms
  output: "Namespace is jenkins or monitoring findme1"
  priority: INFO
  tag: whitelist

- rule: priority_whitelist_pod_name_alert
  desc: add an Info priority to pods that start with wazuh, jenkins or meseeks
  condition: pod_name_contains_whitelist_terms
  output: "Pod name starts with wazuh, jenkins or meseeks findme2"
  priority: INFO
  tag: whitelist
I have created namespaces and pods to test the rules, and they are firing when I expect them to (when falco starts up, when I spawn shells or interact with the pods, for example).
However, the alerts are firing repeatedly, sometimes hundreds at a time, so that the output when I grep the logs looks something like this sample.
Out of curiosity, I took line counts of the different rule alerts when different events occurred and noted that they are not the same. See the table below:

Event          Namespace rule fired   Pod name rule fired
Startup        6                      4
Spawn shell    106                    55
apt update     943                    23
install wget   84                     26
The only two reasons I can think of that these rules would be triggered so many times are:
1. I have written the rules incorrectly, or
2. there are events taking place in the background (not directly triggered by me) that are causing the rules to fire repeatedly.
I believe 2 is the more likely, but I would appreciate anyone who is able to confirm that the rules I have written look alright, or who has any other insights.
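Note on the likely cause: as written, both conditions test only namespace or pod-name fields, so they match every syscall event Falco evaluates from those pods, which would explain hundreds of alerts for a single action like apt update. Narrowing the condition to a specific event class yields one alert per interesting action. A sketch, assuming the spawned_process macro from Falco's default ruleset:

    - rule: priority_whitelist_ns_alert
      desc: add an Info priority to process spawns in the monitoring and jenkins namespaces
      # spawned_process is defined in Falco's default rules and matches
      # successful execve events, i.e. one match per process started
      condition: spawned_process and ns_contains_whitelist_terms
      output: "Namespace is jenkins or monitoring findme1 (command=%proc.cmdline)"
      priority: INFO
      tag: whitelist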

GitLab Runner fails to upload artifacts with "invalid argument" error

I'm completely new to trying to implement GitLab's CI/CD pipelines, but it's been going quite well. In fact, for my ASP.NET project, if I specify a Publish Profile in the msbuild command that uses Web Deploy, it actually deploys the code successfully to the web server.
However, I'm now wanting to have the "build" job create artifacts which are uploaded to GitLab that I can then subsequently deploy. We're using a self-hosted instance of GitLab, for which I'm not an admin, but I can speak to the admin if I know what I'm asking for!
So I've configured my gitlab-ci.yml file like this:
variables:
  NUGET_PATH: 'C:\Program Files\Nuget\Nuget.exe'
  NUGET_SOURCES: 'https://api.nuget.org/v3/index.json'
  MSBUILD_PATH: 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Current\Bin\msbuild.exe'

stages:
  - build

build-job:
  variables:
    CI_DEBUG_TRACE: "true"
  stage: build
  script:
    - '& "$env:NUGET_PATH" restore ApplicationTemplate.sln -Source "$env:NUGET_SOURCES"'
    - '& "$env:MSBUILD_PATH" ApplicationTemplate\ApplicationTemplate.csproj /p:DeployOnBuild=true /p:Configuration=Release /p:PublishProfile=FolderPublish.pubxml'
  artifacts:
    paths:
      - '.\ApplicationTemplate\bin\Release\Publish\'
The output shows that this builds the code just fine, and it also seems to successfully find the artifacts for upload. However, when it uploads the artifacts, even though the request gets a 200 OK response, the process fails. Here is the log output:
So, it finds the artifacts, it attempts to upload them and even gets a 200 OK response (in contrast to the handful of similar reports of this error I've been able to find online), but it still fails due to an invalid argument.
I've already enabled verbose debugging, as you can see from the output, but I'm none the wiser. Looking at the GitLab Runner entries in the Windows Event Log on the box where the runner is hosted doesn't shed any light on things either. The total size of the artifacts is 61.1MB, so I don't think my issue is related to that.
Can anyone see from this output what's invalid? Can I identify which argument is invalid and/or why it's invalid?
Edit: Things I've tried

- Specifying a value for artifacts:expire_in.
- Setting artifacts:public to FALSE, since I'm using a self-hosted GitLab environment and the default value for this setting (TRUE) is not valid in such an environment.
- Trying every format I can think of for the value of the artifacts:paths setting (this seems to be incredibly robust - regardless of the format I use, the Runner seems to have no problem parsing it and finding the files to upload).
- Taking a cue from this question, I created a new project with a very simple build job to upload a single file:
stages:
  - build

build-job:
  variables:
    CI_DEBUG_TRACE: "true"
  stage: build
  script:
    - echo "Test" > test.txt
  artifacts:
    paths:
      - test.txt
About 50% of the time this job hangs on the uploading of the artifacts and I have to cancel it. The other half of the time it fails in exactly the same way as my previous project:
After countless hours working on this, it seems that ultimately the issue was that our internal Web Application Firewall was blocking some part of the transfer of artefacts to the server, or the response back from it. With the WAF reconfigured not to block traffic from the machine running the GitLab Runner, the artefacts are successfully uploaded and the job succeeds.
This would have been significantly easier to diagnose if the logging from GitLab was better. As per my comment on this issue, it should be possible to see the content of the response from the GitLab server after uploading artefacts, even when the response code is 200.
What's strange - and made diagnosing the issue even harder - is that when I worked through the issue with the admin of our GitLab instance, digging through logs and running it in debug mode, the artefact upload process was uploading something successfully. We could see, for example, the GitLab Runner's log had been uploaded to the server. Clearly the WAF's blocking was selective and didn't block everything in both directions.
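For reference, the first two items from "Things I've tried" above would sit in the job like this (a sketch; the expiry value is illustrative):

    build-job:
      # ...build configuration as above...
      artifacts:
        public: false      # the default (true) is not valid on this self-hosted instance
        expire_in: 1 week  # illustrative value
        paths:
          - '.\ApplicationTemplate\bin\Release\Publish\'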

Gitlab-CI: AWS S3 deploy is failing

I am trying to create a deployment pipeline for Gitlab-CI on a react project. The build is working fine and I use artifacts to store the dist folder from my yarn build command. This is working fine as well.
The issue is with my deployment command: aws s3 sync dist/'bucket-name'.
Expected: "Done in x seconds"
Actual:
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Running after_script 00:01
Uploading artifacts for failed job 00:01
ERROR: Job failed: exit code 1
The files seem to have been uploaded correctly to the S3 bucket, however I do not know why I get an error on the deployment job.
When I run the aws s3 sync dist/'bucket-name' locally everything works correctly.
Check out AWS CLI Return Codes
2 -- The meaning of this return code depends on the command being run.
The primary meaning is that the command entered on the command line failed to be parsed. Parsing failures can be caused by, but are not limited to, missing any required subcommands or arguments or using any unknown commands or arguments. Note that this return code meaning is applicable to all CLI commands.
The other meaning is only applicable to s3 commands. It can mean that at least one or more files marked for transfer were skipped during the transfer process. However, all other files marked for transfer were successfully transferred. Files that are skipped during the transfer process include: files that do not exist, files that are character special devices, block special devices, FIFOs, or sockets, and files that the user cannot read from.
The second paragraph might explain what's happening.
There is no yarn build command. See https://classic.yarnpkg.com/en/docs/cli/run
As Anton mentioned, the second paragraph of his answer was the problem. The solution was removing special characters from a couple of SVG filenames. I suspect uploading the dist folder as a (zipped) artifact might have changed some of the file names altogether, which was confusing to S3. By removing ® and + from the filenames, the issue was resolved.
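If you run into this, a dry run makes problem filenames visible before the real sync. A sketch of a deploy job (the bucket name is a placeholder):

    deploy:
      stage: deploy
      script:
        # --dryrun lists the operations sync would perform without transferring
        # anything, which makes it easier to spot unusual filenames
        - aws s3 sync dist/ s3://my-bucket --dryrun
        - aws s3 sync dist/ s3://my-bucket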

Snakemake MissingOutputException latency-wait ignored

I am attempting to run some Picard tools metrics collection in Snakemake. A --dryrun works fine with no errors. When I actually run the Snakefile, I receive a MissingOutputException for reasons I do not understand.
First, here is my rule:
rule CollectAlignmentSummaryMetrics:
    input:
        "bam_input/final/{sample}/{sample}.ready.bam"
    output:
        "bam_input/final/{sample}/metrics/{reference}/alignment_summary.metrics"
    params:
        reference=config['reference']['file'],
        memory="10240m"
    run:
        "java -Xmx{params.memory} -jar $HOME/software/picard/build/libs/picard.jar CollectAlignmentSummaryMetrics R={params.reference} I={input} O={output}"
Now the error:
snakemake --latency-wait 120 -s metrics.snake -p
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
count jobs
38 CollectAlignmentSummaryMetrics
1 all
39
rule CollectAlignmentSummaryMetrics:
input: bam_input/final/TB5173-T14/TB5173-T14.ready.bam
output: bam_input/final/TB5173-T14/metrics/GRCh37/alignment_summary.metrics
jobid: 7
wildcards: reference=GRCh37, sample=TB5173-T14
Error in job CollectAlignmentSummaryMetrics while creating output file bam_input/final/TB5173-T14/metrics/GRCh37/alignment_summary.metrics.
MissingOutputException in line 21 of /home/bwubb/projects/PD1WES/metrics.snake:
Missing files after 5 seconds:
bam_input/final/TB5173-T14/metrics/GRCh37/alignment_summary.metrics
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Exiting because a job execution failed. Look above for error message
Will exit after finishing currently running jobs.
Exiting because a job execution failed. Look above for error message
The --latency-wait is completely ignored. I have even tried bumping it up to 84600. If I run the intended Picard java command manually, it executes with no problem. I've made several Snakemake pipelines without any mysterious issues, so this is driving me quite mad. Thank you for any insight!
Thanks for reporting.
It is a bug that latency-wait is not propagated when using the run directive. I have fixed that in the master branch.
In your rule, you use the run directive. After run, Snakemake expects plain Python code, but you simply provide a string. This means that Python will simply initialize the string and then exit. What you really want here is the shell directive. See here. By using the shell directive, your current problem will be fixed, and you should not be affected by the bug. There is also no need to modify latency-wait. Anyway, the fix for the latency-wait bug will be in the next release of Snakemake.
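Applying that fix to the rule from the question, the command string moves under shell (everything else is unchanged; the adjacent string literals are concatenated by Python):

    rule CollectAlignmentSummaryMetrics:
        input:
            "bam_input/final/{sample}/{sample}.ready.bam"
        output:
            "bam_input/final/{sample}/metrics/{reference}/alignment_summary.metrics"
        params:
            reference=config['reference']['file'],
            memory="10240m"
        shell:
            "java -Xmx{params.memory} -jar $HOME/software/picard/build/libs/picard.jar "
            "CollectAlignmentSummaryMetrics R={params.reference} I={input} O={output}"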

MSBuild SonarQube Runner v1.0 returns with code 1 after "Generating the FxCop ruleset"

I'm trying out SonarQube using the new MSBuild SonarQube Runner v1.0. If I install a fresh SonarQube server locally, the following command works fine, and I can build my solution directly afterward, call the 'end' command, and have the results published in SonarQube:
MSBuild.SonarQube.Runner.exe begin /key:TestKey /name:TestName /version:1.0.0.0
However, if I run this against an existing SonarQube server on the internal network, it always returns with exit code 1:
15:32:40 Creating config and output folders...
15:32:40 Creating directory: c:\Test\MSBuild.SonarQube.Runner-1.0.itsonar\.sonarqube\conf
15:32:40 Creating directory: c:\Test\MSBuild.SonarQube.Runner-1.0.itsonar\.sonarqube\out
15:32:41 Generating the FxCop ruleset: c:\Test\MSBuild.SonarQube.Runner-1.0.itsonar\.sonarqube\conf\SonarQubeFxCop-cs.ruleset
Process returned exit code 1
It seems to download a lot of the dependencies into /.sonarqube, so communication with the server isn't an issue.
Things I've tried:

- checked the access.log, server.log and event logs
- upgraded the existing server to v5.1.2 (clean install using the guide)
- upgraded the sonar-csharp-plugin to v4.1
- right-clicked all .jar files on the server and ensured they are unblocked
- tried the runner directly on the server
- (ongoing) tried debugging the source code (it's happening somewhere in the pre-process step: success comes back as true, but the error code is 1)
- disabled UAC on the server and rebooted
- re-installed the JRE on both server and client, and ensured JAVA_HOME is set correctly in both PATH and the registry

Any help or pointers greatly appreciated. I've been stuck on this for 2 days and can't think of anything else to try except to continue trawling through the source code. Thank you.
This is a tricky one! Looking at the code, I see only one path that can yield this output:
1. It fails while generating the FxCop ruleset for C#, as the VB.NET FxCop ruleset message is not logged - see TeamBuildPreProcessor.cs#L149 and TeamBuildPreProcessor.cs#L185.
2. The GenerateFxCopRuleset() call for C# threw a WebException, leading to the call of Utilities.HandleHostUrlWebException() - which has to return true for the exception to be silently swallowed - see Utilities.cs#L153.
3. The only path that returns true without logging any message is when an HttpStatusCode.NotFound was received - see Utilities.cs#L158.
4. The control flow goes back to FetchArgumentsAndRulesets(), which returns false, and then back to Execute(), which returns false as well - see TeamBuildPreProcessor.cs#L106.
5. The MSBuild SonarQube Runner "begin" phase (called the "preprocessor" in the code) fails - see Program.cs#L42.
So, at some point, some SonarQube web service required for the C# FxCop ruleset generation is returning an HTTP 404 error.
Could you monitor your network traffic and listen for the failing HTTP call? [I will keep on updating this answer afterwards]
EDIT: Indeed the error is caused by the quality profile name containing special characters. Such characters are currently badly URL-escaped, which leads to a 404.
I've created the following ticket to fix this issue in the upcoming release: http://jira.sonarsource.com/browse/SONARMSBRU-125