Azure CI Pipeline YAML for SpecFlow Integration - Selenium

I am trying to configure my SpecFlow project with the Azure CI pipeline. When I try to generate SpecFlow+LivingDoc from TestExecution.json, the pipeline is unable to find the path. I am attaching my YAML and specflow.json. Can anybody help me with this?
YAML
- task: SpecFlowPlus@0
  displayName: 'Upload SpecFlow Living Docs'
  inputs:
    projectFilePath: 'MyProjecct'
    projectName: 'MyProjecct'
    testExecutionJson: '**\TestExecution.json'
    projectLanguage: 'en'
specflow.json
{
  "livingDocGenerator": {
    "enabled": true,
    "filePath": "{CurrentDirectory}\\TestResults\\TestExecution.json"
  }
}
Error
##[error]Error: Command failed: dotnet D:\a_tasks\SpecFlowPlus_32f3fe66-8bfc-476e-8e2c-9b4b59432ffa\0.6.859\CLI\LivingDoc.CLI.dll feature-folder "D:\a\1\s\MyProjecct" --output-type JSON --test-execution-json "**/TestExecution.json" --output "D:\a\1\s\16707\FeatureData.json" --project-name "MyProjecct" --project-language "en"

Before the SpecFlowPlus task you can add a shell task (such as Bash) that executes the ls command, to check whether the TestExecution.json file has been generated and exists in the specified feature folder under the current directory.
ls -R
If the TestExecution.json file does not exist, check whether there is an issue with the step that generates it.
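For example, a minimal inline Bash task placed before the SpecFlowPlus step might look like this (a sketch; the displayName is arbitrary):
- task: Bash@3
  displayName: 'List files in working directory'
  inputs:
    targetType: 'inline'
    script: 'ls -R'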

I got the same error when I tried to configure this a few days ago.
Try giving the full path to your TestExecution.json; that should work. Pattern matching does not work in the file paths, so provide the full path to your JSON files as well as to the project/test assembly, etc.
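For example (a sketch; the TestResults location is an assumption based on the specflow.json filePath above, so adjust it to wherever your test step actually writes the file):
- task: SpecFlowPlus@0
  displayName: 'Upload SpecFlow Living Docs'
  inputs:
    projectFilePath: 'MyProjecct'
    projectName: 'MyProjecct'
    testExecutionJson: '$(System.DefaultWorkingDirectory)/TestResults/TestExecution.json'
    projectLanguage: 'en'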

Related

Cake Running NUnit 3 using DotNetCoreTest Is Failing

Task("Nunit")
.Does(() => {
DotNetCoreTest("D:\Workspace\Proj14\test\test\WebAutomation\NUnit\UserManageLMTWS.cs", );
});
My build.cake currently defines the task as listed in the https://cakebuild.net/api/Cake.Common.Tools.DotNetCore/DotNetCoreAliases/8191BBC4 example. I don't pass any settings, as I guess I'm building into Debug instead of Release, but I am getting the following errors when running the task:
Error: Bootstrapping failed for 'D:/Workspace/Proj14/test/test/Nunit'.
Nunit, line #0: Could not find script 'D:/Workspace/Proj14/test/test/Nunit'.
Do I need to pass NUnit into the DotNetCoreTest method somehow?
DotNetCoreTest is meant to test projects, not individual files.
So you specify either a <PROJECT> | <SOLUTION> | <DIRECTORY> path, i.e.:
DotNetCoreTest("./test/Project.Tests/", settings);
DotNetCoreTest("./test/My.sln", settings);
DotNetCoreTest("./test/Project.Tests/Project.Tests.csproj", settings);
DotNetCoreTest calls the .NET SDK CLI command dotnet test.
The error you're receiving, though, is because the path to the specified script is wrong or doesn't exist. There's no file called D:/Workspace/Proj14/test/test/Nunit, which is why the error message says Could not find script 'D:/Workspace/Proj14/test/test/Nunit'.
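Put together, the corrected task might look like this (a sketch; the .csproj path is an assumption, so point it at your actual test project):
Task("Nunit")
    .Does(() => {
        // dotnet test runs against the test project, not a single .cs file
        DotNetCoreTest("./test/test/WebAutomation/WebAutomation.csproj");
    });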

Debugging terragrunt dependency block resulting in s3 permission error

I'm trying to use a dependency block for the first time, but I get AWS S3 list-object permission-denied errors and have trouble debugging the issue.
The setup is as follows, using an s3 backend for storing terraform state:
A git repo containing the terraform modules:
archive
s3_inventory
Instantiations of the above:
prod/eu/archive/terragrunt.hcl:
terraform {
  source = "git::ssh://git@my_server//archive?ref=v1.0.0"
}

include {
  path = find_in_parent_folders()
}

dependency "s3-inventory" {
  config_path = "../s3-inventory/"
}
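(For context, a dependency block like this is consumed through its outputs, which is why terragrunt has to read the dependency's state; a hypothetical example with an assumed output name:)
inputs = {
  inventory_bucket = dependency.s3-inventory.outputs.bucket_name # assumed output name
}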
prod/eu/s3_inventory/terragrunt.hcl:
terraform {
  source = "git::ssh://git@my_server//s3_inventory?ref=v1.0.0"
}

include {
  path = find_in_parent_folders()
}
Running terragrunt apply in prod/eu/archive works just fine when I remove the dependency block from the hcl file. It fails when I add the dependency block in.
Running terragrunt output -json in prod/eu/s3-inventory also works just fine.
With debugging flags on I still don't seem to get enough info as to why it's failing.
terragrunt apply --terragrunt-log-level debug --terragrunt-debug in prod/eu/archive results in something like this:
...<omitted>...
DEBU[0000] Detected module /Users/tim.kersten/prod/eu/s3-inventory/terragrunt.hcl is already init-ed. Retrieving outputs directly from working directory. prefix=[/Users/tim.kersten/prod/eu/s3-inventory]
DEBU[0000] Running command: terraform output -json prefix=[/Users/tim.kersten/prod/eu/s3-inventory]
Failed to load state: AccessDenied: Access Denied
status code: 403, request id: ABC123DEF456GHI, host id: WW91J3JlIHRlcnJpYmx5IG5vc2UgZm9yIHRyeWluZyB0byBsb29rIGF0IG15IGhvc3QK
ERRO[0003] exit status 1
Something is clearly different, but the debugging options I set on terragrunt don't seem to give me enough info to understand what's different.
Anyone understand what's going on here?
Edit:
terragrunt version: 0.28.6

"amplify publish" fails to deploy without any detailed error stack trace

We are using AWS Amplify to develop our Next.js application for the first time and are trying the manual deployment process. We get the following error when we run "amplify publish".
This error is frustrating because there is no stack trace to figure out what is causing the issue. I can see the artifacts were successfully uploaded to the S3 bucket, but the deployment fails.
Error:
Export successful
✔ Zipping artifacts completed.
✖ Deployment failed! Please report an issue on the Amplify Console GitHub issue tracker at https://github.com/aws-amplify/amplify-console/issues.
An error occurred during the publish operation
I tried to manually upload the zipped file using "drag and drop". It seems to be stuck with the message Your build is being queued.... for hours now.
Any help is highly appreciated. This is a huge blocker for us.
I fixed this issue.
First run this command:
amplify configure project
Then set the Distribution Directory Path to out, like this:
Distribution Directory Path: out
Try putting the static content on S3 manually.
For example, if you have generated static content in the dist/ directory using nuxt generate, run the following command:
aws s3 sync dist/ s3://{YOUR_BUCKET_NAME}
If this solves your problem, then try using CodeCommit for your Amplify project until this gets fixed.
GitHub issue: https://github.com/aws-amplify/amplify-console/issues/1369
We resolved the above issue. The problem was with the IAM policies: once we fixed the role policies, we were able to publish and see the progress of the deployment.
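The answer doesn't show the exact policies, but as an illustration, the role used for publishing needs at least S3 access to the deployment artifact bucket, along the lines of this hypothetical statement (the bucket name is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::your-amplify-artifact-bucket",
        "arn:aws:s3:::your-amplify-artifact-bucket/*"
      ]
    }
  ]
}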
https://docs.amplify.aws/guides/hosting/nextjs/q/platform/js/
1) SSG Only:
Inside package.json
"scripts": {
"build": "next build && next export",
...
}
Run this command.
amplify configure project
Then keep everything the way it is, except this:
Distribution Directory Path: out
2) SSG & SSR:
Inside package.json
"scripts": {
"build": "next build",
...
}
Run this command.
amplify configure project
Then keep everything the way it is, except this:
Distribution Directory Path: .next
Note:
Do not forget to change the Image component in index.js, because AWS currently does not support next/image.
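For instance, a plain <img> tag can stand in until next/image is supported (a hypothetical sketch; the file name is illustrative):
// index.js - swap next/image for a plain <img>
// import Image from 'next/image'; // remove this import
export default function Home() {
  return <img src="/logo.png" alt="Logo" width="200" height="100" />;
}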
https://docs.amplify.aws/guides/hosting/nextjs/q/platform/js/#deploy-and-host-an-ssg-only-app

plugin not found error in node.js command prompt

I have configured cumulocity.json as below:
{
  "name": "Cumulocityexercises",
  "availability": "PRIVATE",
  "contextPath": "cumulocityexercises",
  "key": "cumulocityexercises-appkey",
  "resourcesUrl": "/",
  "type": "HOSTED",
  "tabsHorizontal": true,
  "imports": [
    "core/c8yBranding",
    "cumulocityexercises/myplugin",
    "cumulocityexercises/docsplugin"
  ]
}
but when I try to build the plugin (myplugin), I get a "plugin not found" error. Can anyone help me with this, please?
This is most likely linked to your project structure: the plugin folders need to sit under the application root, and you need to run the command from that root level (cumulocity-enhanced-ui in my case).
You need to run the following command to build a single plugin:
c8y build:plugin <pluginFolderName>
e.g.
c8y build:plugin dashboardUtils
The same goes for the manifest declarations: they need to match the plugin folder names (case-sensitive).
What exactly is the command you are using to build the plugin?
If it is something like this:
$ c8y build:plugin docsplugin
docsplugin plugin not found
then you should check that your plugin directory has the same name as specified in the JSON file, i.e. the cumulocity.json manifest file in the main app directory. A second manifest file goes in the plugin directory.
Note that you must execute the build command from the main app directory, which in your case is cumulocityexercises; otherwise you will get the same error message.
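For reference, the expected layout (assuming the folder names from your manifest) looks roughly like this:
cumulocityexercises/        <- main app directory; run c8y build:plugin from here
├── cumulocity.json         <- app manifest with the imports shown above
├── myplugin/
│   └── cumulocity.json     <- plugin manifest
└── docsplugin/
    └── cumulocity.json     <- plugin manifest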

Generate HTML report for WebdriverIO/Cucumber framework

I am using WebdriverIO/Cucumber (wdio-cucumber-framework) for my test automation, and I want to get the test execution results in an HTML file. As of now I am using the Spec Reporter (wdio-spec-reporter), which prints the results to the console window, but I want all the execution reports in an HTML file.
How can I get the WebdriverIO test execution results in an HTML file?
Thanks.
OK, finally got some spare time to tackle your question @Thangakumar D. WebdriverIO reporting is a vast subject (there are multiple ways to generate such a report), so I'll go ahead and start with my favorite reporter: Allure!
Allure Reporter:
[Preface: make sure you're in your project root]
Install your package (if you haven't already): npm install wdio-allure-reporter --save-dev
Install Allure CommandLine (you'll see why later): npm install -g allure-commandline --save-dev
Setup your wdio.config.js file to support Allure as a reporter
wdio.config.js:
reporters: ['allure', 'dot', 'spec', 'json'],
reporterOptions: {
  outputDir: './wdio-logs/',
  allure: {
    outputDir: './allure-reports/allure/'
  }
}
Run your tests! Notice that, once your regression ends, your /allure-results/ folder has been populated with multiple .json, .txt, .png (if you have screenshot errors), and .xml files. The content of this folder is going to be used by Allure CommandLine to render your HTML report.
Go to your /allure-results/ folder and generate the report via allure generate <reportsFolderPath> (run allure generate . if you want your /allure-reports/ folder inside /allure-results/).
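For instance, assuming the folders above, you can pin the generate step to an explicit output directory (the --output flag is standard Allure CLI):
allure generate ./allure-results/ --output ./allure-reports/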
Now go into your /allure-reports folder and open index.html in your browser of choice (use Firefox for starters).
Note: The generated index.html file won't have all the content loaded on Chrome unless you do some tweaks. It's due to default WebKit not being able to load all the AJAX calls required. Read more about it here.
If you've successfully completed all the previous steps, the generated report should render in your browser.
Hope this helped. Cheers!
Note: I'll try to UPDATE this post when I get some more time with other awesome ways to generate reports from your WebdriverIO reporter logs, especially if this post gets some love/upvotes along the way.
e.g. another combo that I enjoy using: wdio-json-reporter/wdio-junit-reporter coupled with an easy-to-use templating language, Jinja2.
I have been using the Mochawesome reporter and it looks beautiful; check it out here.
The Mochawesome reporter generates mochawesome.json, which can then be used to create a beautiful report using the Mochawesome report generator.
Installation:
> npm install --save wdio-mochawesome-reporter
> npm install --save mochawesome-report-generator@2.3.2
It is easy to integrate by adding these lines to wdio.conf.js:
// sample wdio.conf.js
module.exports = {
  // ...
  reporters: ['dot', 'mochawesome'],
  reporterOptions: {
    outputDir: './', // mochawesome.json file will be written to this directory
  },
  // ...
};
Add the script to package.json:
"scripts": {
"generateMochawesome": "marge path/to/mochawesome.json --reportTitle 'My project results'"
},