Display screenshots on test failure in Azure DevOps pipeline - testing

I would like to display a screenshot when a TestCafe test fails in an Azure pipeline. I can't use anything from the marketplace, so it needs to be done with the Copy Files task (as recommended by my colleague).
How should I do that?

We cannot show the screenshot itself in the pipeline. As a workaround, we can show the content of the test report in the pipeline via Logging Commands.
Since the logging command needs a local file, we need to configure a self-hosted agent and copy the report file to another path, then show its content in the pipeline.
Add a PowerShell task after the Copy Files task and enter the script below:
Copy-Item -Path $(build.artifactstagingdirectory)\report.html -Destination {local folder path} -Recurse
Write-Host "##vso[task.addattachment type=Distributedtask.Core.Summary;name=testcafeFailLog;]{local folder path}\report.html"
Result: I did a simple test; you can check it.
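Since logging commands are just lines written to standard output, the same approach also works from a Bash task on a Linux self-hosted agent. The following is only a hedged sketch; the report name and destination folder are illustrative:
#!/bin/bash
# Sketch for a Linux self-hosted agent; paths are illustrative.
# BUILD_ARTIFACTSTAGINGDIRECTORY is the environment-variable form of $(Build.ArtifactStagingDirectory).
REPORT_SRC="$BUILD_ARTIFACTSTAGINGDIRECTORY/report.html"
REPORT_DEST="$HOME/testcafe-reports/report.html"

mkdir -p "$(dirname "$REPORT_DEST")"
cp "$REPORT_SRC" "$REPORT_DEST"

# Logging command: attach the copied report as a build summary
echo "##vso[task.addattachment type=Distributedtask.Core.Summary;name=testcafeFailLog;]$REPORT_DEST"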

Related

GitLab Runner fails to upload artifacts with "invalid argument" error

I'm completely new to trying to implement GitLab's CI/CD pipelines, but it's been going quite well. In fact, for my ASP.NET project, if I specify a Publish Profile in the msbuild command that uses Web Deploy, it actually deploys the code successfully to the web server.
However, I'm now wanting to have the "build" job create artifacts which are uploaded to GitLab that I can then subsequently deploy. We're using a self-hosted instance of GitLab, for which I'm not an admin, but I can speak to the admin if I know what I'm asking for!
So I've configured my gitlab-ci.yml file like this:
variables:
  NUGET_PATH: 'C:\Program Files\Nuget\Nuget.exe'
  NUGET_SOURCES: 'https://api.nuget.org/v3/index.json'
  MSBUILD_PATH: 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Current\Bin\msbuild.exe'

stages:
  - build

build-job:
  variables:
    CI_DEBUG_TRACE: "true"
  stage: build
  script:
    - '& "$env:NUGET_PATH" restore ApplicationTemplate.sln -Source "$env:NUGET_SOURCES"'
    - '& "$env:MSBUILD_PATH" ApplicationTemplate\ApplicationTemplate.csproj /p:DeployOnBuild=true /p:Configuration=Release /p:PublishProfile=FolderPublish.pubxml'
  artifacts:
    paths:
      - '.\ApplicationTemplate\bin\Release\Publish\'
The output shows that this builds the code just fine, and it also seems to successfully find the artifacts for upload. However, when it uploads the artifacts, even though the request gets a 200 OK response, the process fails. Here is the log output:
So, it finds the artifacts, it attempts to upload them and even gets a 200 OK response (in contrast to the handful of similar reports of this error I've been able to find online), but it still fails due to an invalid argument.
I've already enabled verbose debugging, as you can see from the output, but I'm none the wiser. Looking at the GitLab Runner entries in the Windows Event Log on the box where the runner is hosted doesn't shed any light on things either. The total size of the artifacts is 61.1MB, so I don't think my issue is related to that.
Can anyone see from this output what's invalid? Can I identify which argument is invalid and/or why it's invalid?
Edit: Things I've tried
Specifying a value for artifacts:expire_in.
Setting artifacts:public to FALSE, since I'm using a self-hosted GitLab environment and the default value for this setting (TRUE) is not valid in such an environment.
Trying every format I can think of for the value of the artifacts:paths setting (this seems to be incredibly robust - regardless of the format I use, the Runner seems to have no problem parsing it and finding the files to upload).
Taking a cue from this question, I created a new project with a very simple build job to upload a single file:
stages:
  - build

build-job:
  variables:
    CI_DEBUG_TRACE: "true"
  stage: build
  script:
    - echo "Test" > test.txt
  artifacts:
    paths:
      - test.txt
About 50% of the time this job hangs on the uploading of the artifacts and I have to cancel it. The other half of the time it fails in exactly the same way as my previous project:
After countless hours working on this, it seems that ultimately the issue was that our internal Web Application Firewall was blocking some part of the transfer of artefacts to the server, or the response back from it. With the WAF reconfigured not to block traffic from the machine running the GitLab Runner, the artefacts are successfully uploaded and the job succeeds.
This would have been significantly easier to diagnose if the logging from GitLab was better. As per my comment on this issue, it should be possible to see the content of the response from the GitLab server after uploading artefacts, even when the response code is 200.
What's strange - and made diagnosing the issue even harder - is that when I worked through the issue with the admin of our GitLab instance, digging through logs and running it in debug mode, the artefact upload process was uploading something successfully. We could see, for example, the GitLab Runner's log had been uploaded to the server. Clearly the WAF's blocking was selective and didn't block everything in both directions.

Access denied error using VSTS

I'm trying to create a WCF service in a lab using VSTS.
I have created a build definition that works using an MSBuild task. It then uses robocopy to copy the relevant DLLs to a remote directory inside the lab using the Publish Artifacts step.
However, I need the content to be created as a Windows service and started after it has been published. It seems like something is running, since I see a log file created about 9 minutes after a successful publish, but I cannot see my service in the Services console or in IIS.
When I try to run a bat script (using the run script step) that does an sc create, I get an access denied error, even though in the VSTS build definition I have given the step permission to modify the environment.
This is the full error:
2018-05-17T13:00:13.7702615Z ##[section]Starting: Run script GloBill/InstallBackEnd.bat
2018-05-17T13:00:13.7705444Z ==============================================================================
2018-05-17T13:00:13.7705561Z Task : Batch Script
2018-05-17T13:00:13.7705655Z Description : Run a windows cmd or bat script and optionally allow it to change the environment
2018-05-17T13:00:13.7705748Z Version : 1.1.3
2018-05-17T13:00:13.7705824Z Author : Microsoft Corporation
2018-05-17T13:00:13.7705924Z Help : [More Information](https://go.microsoft.com/fwlink/?LinkID=613733)
2018-05-17T13:00:13.7706023Z ==============================================================================
2018-05-17T13:00:13.7775377Z ##[command]C:\agent\_work\1\s\GloBill\InstallBackEnd.bat
2018-05-17T13:00:13.8030595Z
2018-05-17T13:00:13.8031049Z C:\agent\_work\1\s>sc create GloBillBackEnd ../Services/GloBill.WS.exe
2018-05-17T13:00:13.8048684Z [SC] OpenSCManager FAILED 5:
2018-05-17T13:00:13.8048781Z
2018-05-17T13:00:13.8048901Z Access is denied.
2018-05-17T13:00:13.8048957Z
2018-05-17T13:00:13.8064609Z ##[error]Process completed with exit code 5.
2018-05-17T13:00:13.8073202Z ##[section]Finishing: Run script GloBill/InstallBackEnd.bat
I'm running out of ideas.
The problem was that I was trying to deploy a release from a hosted agent that resided on another machine.
I had to configure a new agent solely for deploying, and then tweak my installation script a little by adding the -executionpolicy bypass flag.
Here is the new script:
(%1 is the file path)
Powershell.exe -executionpolicy bypass -File %1 -username Username -password ****** -exepath *exe* -serviceName *svcName*

How do I download one or more files from a stopped Fargate task?

I have an ECS task that runs some test cases. I have it running in Fargate. Yay!
Now I want to download the test results file(s) from the container. I have the task and container IDs handy. I can find the exit code with
aws ecs describe-tasks --cluster Fargate --tasks <my-task-id>
How do I download the log and/or files produced?
It looks like, as of right now, the only way to get test results off of my server is to send the results to S3 before the container shuts down.
From this thread, there's no way to mount a volume / EFS onto a Fargate container.
Here's my bash script that runs my tests (by calling build.sh) and then uploads the results to S3:
#!/bin/bash
echo Running tests...
pushd ~circleci/project/
export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY
export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_KEY
commandToRun="~/project/.circleci/build_scripts/build.sh"
# Run the command, teeing its output to a log file
eval $commandToRun 2>&1 | tee /tmp/build.log
# Capture the exit code of the test run itself, not of tee
exitCode=${PIPESTATUS[0]}
# Upload the log to S3 before the container shuts down
aws s3 cp /tmp/build.log s3://$CICD_BUCKET/build.log \
    --storage-class REDUCED_REDUNDANCY \
    --region us-east-1
exit ${exitCode}
Of course, you'll have to pass in the AWS_ACCESS_KEY, AWS_SECRET_KEY and CICD_BUCKET environment variables. The bucket name you choose needs to be pre-created, but any directory structure below it does NOT need to be created in advance.
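For example, the bucket could be created up front with the AWS CLI; the bucket name here is just an illustration:
aws s3 mb s3://my-cicd-bucket --region us-east-1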
You probably want to look at using CodeBuild for this use case, which can automatically copy artifacts to S3.
It's actually quite easy to orchestrate the following using a simple bash script and the AWS CLI:
Idempotently Create/Update a CodeBuild project (using a simple CloudFormation template you can define in your source repository)
Run a CodeBuild job that executes a given revision of your source repository (again using a buildspec.yml specification defined in your source repository)
Attach to the CloudWatch logs log group for your CodeBuild job and stream log output
Finally detect when the job has completed successfully or not and then download any artifacts locally using S3
I use this approach to run builds in CodeBuild, with Bamboo as the overarching continuous delivery system.
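As a rough, hedged sketch of that flow with the AWS CLI (the template, project, and bucket names below are illustrative and not from the original setup):
#!/bin/bash
# Illustrative sketch of the CodeBuild orchestration described above.
set -euo pipefail

REVISION=${1:?usage: $0 <source-revision>}

# 1. Idempotently create/update the CodeBuild project from a CloudFormation template
aws cloudformation deploy \
  --template-file my-codebuild.yml \
  --stack-name my-build-project \
  --capabilities CAPABILITY_IAM

# 2. Start a build for the given source revision and capture its id
BUILD_ID=$(aws codebuild start-build \
  --project-name my-build-project \
  --source-version "$REVISION" \
  --query 'build.id' --output text)

# 3. Poll until the build completes (a fuller script would stream the CloudWatch logs here)
STATUS=IN_PROGRESS
while [ "$STATUS" = "IN_PROGRESS" ]; do
  sleep 15
  STATUS=$(aws codebuild batch-get-builds --ids "$BUILD_ID" \
    --query 'builds[0].buildStatus' --output text)
done
echo "Build finished with status: $STATUS"

# 4. Download whatever artifacts the project copied to S3
aws s3 cp "s3://my-artifact-bucket/$BUILD_ID/" ./artifacts/ --recursive

# Fail the script if the build did not succeed
[ "$STATUS" = "SUCCEEDED" ]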

JMeter Test Results Monitoring/Analysis

I want to start load testing by running JMeter from the command line for more accurate test results, but how can I monitor the run and then analyze the results after the test finishes?
You can generate a JTL (JMeter results) file while executing the JMX (JMeter script) file from the command line. A sample command for generating the JTL file looks like this:
jmeter -n -t path-to-jmeterScript.jmx -l path-to-jtlFile.jtl
After the script finishes executing, you can open the JMeter GUI and simply load the JTL file into any listener (as per your requirement).
Most of the listeners in JMeter have an option to save the results into a file. This file usually contains not the report itself, but the samples generated by the tests. If you define this filename, you can generate the reports later from these saved files. For example, see http://jmeter.apache.org/usermanual/component_reference.html#Summary_Report.
If you run JMeter in command-line non-GUI mode and pass a results file name via the -l parameter, it will write the results there. After the test finishes you will be able to open the file with the Listener of your choice and perform the analysis.
By default JMeter writes results in chunks; if you need to monitor them in real time, add the following line to the user.properties file (which lives under the /bin folder of your JMeter installation):
jmeter.save.saveservice.autoflush=true
You can use other properties whose names start with jmeter.save.saveservice.* to control which metrics you need to store. The list with default values can be seen in the jmeter.properties file. See the Apache JMeter Properties Customization Guide for more information on the various JMeter property types and ways of working with them.
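Putting these pieces together, a hedged example of a full non-GUI run might look like the following; the file and folder names are illustrative, the autoflush property is passed on the command line instead of via user.properties, and -e -o additionally generates JMeter's HTML dashboard report when the run ends:
jmeter -n \
  -t ./load-test.jmx \
  -l ./results.jtl \
  -Jjmeter.save.saveservice.autoflush=true \
  -e -o ./report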
You can also consider running your JMeter test via the Taurus tool - it provides statistics as the test runs, either in console mode or via its web interface.

Jenkins: Publish over SSH after failed build

I am trying to use the Publish Over SSH plugin to publish many kinds of build artifact to an external server. Examples of build artifacts are compiled builds, XML output from testing, and JSON output from linting.
If testing or linting results in errors, the build will fail or be marked unstable. In the case of a failed build, the Publish Over SSH plugin will not copy the build artifacts, writing to the console:
SSH: Current build result is [FAILURE], not going to run.
I see no reason why I wouldn't want to publish this information if it exists, and I would like to continue to report errors as build failures. So, is there any way to force Jenkins to publish build artifacts even if the job is marked as a failure?
I thought I could use the Flexible Publish plugin to force this, by wrapping the Publish Over SSH step in an "always" condition, but this gave the same output as before on a build failure.
I can think of a couple of work-arounds:
a) store the build status in an environment variable; force the status to SUCCESS; perform the publish step; recover the build status from the environment variable using java -jar jenkins-cli.jar set-build-status $STORED_STATUS
OR
b) Write a bash script to perform the publishing step manually using SSH, cutting out the Publish Over SSH plugin altogether
Before I push forward with either of these solutions (neither of which I like), is there any piece of configuration that I'm missing?
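For reference, work-around (b) could be as simple as the hedged sketch below; the host, key, and artifact paths are illustrative, and JOB_NAME / BUILD_NUMBER are standard Jenkins environment variables:
#!/bin/bash
# Sketch of work-around (b): copy artifacts over SSH ourselves, so the step
# runs regardless of the build result. Host and paths are placeholders.
ARCHIVE_HOST="jenkins@archive.example.com"
ARCHIVE_PATH="/var/archive/${JOB_NAME}/${BUILD_NUMBER}"

ssh -i "$HOME/.ssh/id_rsa" "$ARCHIVE_HOST" "mkdir -p $ARCHIVE_PATH"
scp -i "$HOME/.ssh/id_rsa" build/*.xml lint/*.json "$ARCHIVE_HOST:$ARCHIVE_PATH/"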
The solution I ended up using was to use rsync/ssh to copy the files manually in a post-build script. I configured this in my Jenkins Job Builder YAML like so:
- publisher:
    name: publish-to-archive
    publishers:
      - post-tasks:
          - matches:
              - log-text: ".*"
            script: |
              ssh -i ${{HOME}}/.ssh/id_rsa jenkins@archiver "mkdir -p {archive_path}"
              rsync -Pravdtze "ssh -i ${{HOME}}/.ssh/id_rsa" {source_path} jenkins@archiver:{archive_path}
Quoting old hooky on jenkinsci-users:
How can I force Publish Over SSH to work even if the build has been marked a failure?
Use "Send files or execute commands over SSH after the build runs" in configuration section "Build environment"
Job configuration / Build Environment / Send files or execute commands over SSH after the build runs
instead of using a post-build or build-step.