I have installed DevStack (Rocky version) and Yardstick on my PC.
I want to test StorPerf (storage performance) manually and see the result. What is the step-by-step procedure I have to follow?
I have gone through the link below, but I am not sure how to configure and test StorPerf: https://wiki.opnfv.org/display/storperf/Storperf
First, important note: This is a project in which the Selenium/Cucumber test suite is working. It's working locally for the developers, and it's working in the Tekton environment.
Second: my IntelliJ environment is identical to the developers'. And other Selenium/Cucumber projects are running fine on my local machine.
When I try to run a fresh copy of the project in question locally, none of the tests start, and there is no error message.
Normally, this would be caused by missing Glue parameters in the Run/Debug configuration, since IntelliJ for some reason is often unable to add these automatically. That is not the case here.
The test runner when I try to start a specific scenario:
Sorry for the language. The tests are all in Norwegian. But I don't think that matters for seeing the problem.
Normally, when there is an actual error in a test step - in this case "Gitt jeg er en vanlig søker" ("Given I am an ordinary applicant") - there would be some info to the right of the steps overview. This is now blank, as can be seen. So the test steps aren't even started.
The run/debug configuration:
e2e.cucumber.felles is the package where the Java file with the first step definition is located. So, that's not the reason.
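For completeness, the kind of runner setup I would expect to work looks roughly like this - just a sketch, where the features path is only an example and the imports may differ depending on the Cucumber-JVM version:

package e2e.cucumber;

import org.junit.runner.RunWith;
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

// Minimal JUnit 4 runner with the glue package given explicitly.
// The features path is only an example location for the .feature files.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",
        glue = "e2e.cucumber.felles",
        plugin = {"pretty"}
)
public class RunCucumberTest {
}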
Any ideas?
First of all, my terminology may be incorrect for some of these terms, as I am very new to Jest, so if that is the case I would love to be corrected to help me learn.
In case I am using the Jest terminology incorrectly, here is what I mean:
Test Suite - The entire group of test files I am attempting to run
Test File - The actual .js test file that is being run
Test - The individual 'it' code blocks in each test file.
Currently, I am using a group of around 20 Jest test files to test my API endpoints for my SQL Server and its corresponding linked server.
To do this, I run an npm command like so in the terminal.
npm run test:file ape/linked -- --env=monke.env
With how Jest currently works, if one of the tests in the 20 test files fails, it quits out of the test suite entirely.
I would like it to just fail out of whatever test file it is in, then continue to the next test file.
I know Jest currently has the --bail flag, but enabling this continues running the same test file on failure, which I can't have happen due to the nature of my linked server and my actual SQL Server.
Any help would be greatly appreciated. I am new to all of this, so let me know if more info needs to be included.
This will be running on various macOS versions as well as Ubuntu Server.
I am using the Karate framework for API testing. As part of our CI efforts, we send an email at the end of test execution with a summary of the test results. There is a need to include a screenshot of the test execution counts from the 'overview-feature.html' file.
I did so through the TestRunner.java file - I launched Chrome using Chrome.start() and then used it to take the screenshot. It all works well locally on Windows.
However, when executing on the CI server, which is a Unix box, the Chrome executable is not present in the default location (/usr/bin/google-chrome), and hence the connection to localhost fails.
Is there a way we can change the default location of the Chrome executable?
PS: Apologies if this was too trivial to be asked.
Yes, Chrome on CI is hard to get right; refer to https://stackoverflow.com/a/62325328/143475 - note that CI boxes are typically "headless", so a browser may not even be installed.
I think the best thing for you is to ZIP the HTML and send it. But I really think you need to work with some CI experts, because the report generation and e-mailing business is normally done by things like Jenkins. What you are doing is certainly not normal or best-practice.
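For example, a rough sketch of zipping the report folder before mailing it - note that 'target/surefire-reports' is just an assumption about where your HTML report ends up, so adjust the paths to your project:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ReportZipper {

    // zip every file under the report directory into one archive that can be attached to the e-mail
    public static void zipReports(Path reportDir, Path zipFile) throws IOException {
        List<Path> files;
        try (Stream<Path> walk = Files.walk(reportDir)) {
            files = walk.filter(Files::isRegularFile).collect(Collectors.toList());
        }
        try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(zipFile))) {
            for (Path file : files) {
                zos.putNextEntry(new ZipEntry(reportDir.relativize(file).toString()));
                Files.copy(file, zos);
                zos.closeEntry();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // report directory is an assumption - point it at wherever the HTML report is written
        zipReports(Paths.get("target/surefire-reports"), Paths.get("target/karate-report.zip"));
    }
}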
If you really want, there is a Karate Docker container that can give you a proper Chrome instance (see docs) but that is overkill for what you need.
EDIT: The Chrome Java API allows for customization of the executable path and this is in the docs: https://github.com/intuit/karate/tree/master/karate-core#chrome-java-api
It should be something like this:
Chrome.start("/opt/blah/chrome");
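And roughly how that could be combined with the screenshot step - a sketch only, assuming the setUrl() / screenshot() methods described in the Chrome Java API docs; the import path, report location and output file are things to adjust for your project and Karate version:

import com.intuit.karate.driver.chrome.Chrome;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class OverviewScreenshot {

    public static void main(String[] args) throws Exception {
        // point Karate at the Chrome binary installed on the CI box (example path)
        Chrome chrome = Chrome.start("/opt/blah/chrome");
        // the report location is an assumption - use wherever overview-feature.html lands on your build
        chrome.setUrl(new File("target/overview-feature.html").toURI().toString());
        // grab the page as a PNG and write it next to the report
        byte[] bytes = chrome.screenshot();
        Files.write(Paths.get("target/overview-feature.png"), bytes);
        chrome.quit();
    }
}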
While implementing an automated way to GUI-test our web application with Selenium, I ran into some issues.
I am using selenese-runner to execute our Selenium test suites (created with Selenium IDE) as a post-build action in Jenkins.
This works perfectly fine: the build fails when something is wrong, and the build succeeds if all tests pass. The results are stored on a per-build basis as HTML files generated by selenese-runner.
My problem, however, is that I can't seem to find a way to display these results in the respective Jenkins build.
Does anyone have an idea how to solve this issue? Or am I on the wrong path entirely?
Your help is highly appreciated!
I believe the JUnit plugin should do what you want, but it doesn't work for me.
My config uses this shell script to run the tests (you can see the names of all my test suites):
# start a virtual framebuffer so the browser has a display to run against
/usr/bin/Xvfb &
export DISPLAY=localhost:0.0
cd ${WORKSPACE}
# run all suites through selenese-runner, writing screenshots and HTML results to ./seleniumResults/
java -jar ./test/selenium/bin/selenese-runner.jar --baseurl http://${testenvironment} --screenshot-on-fail ./seleniumResults/ --html-result ./seleniumResults/ ./test/selenium/Search_TestSuite.html ./test/selenium/Admin_RegisteredUser_Suite.html ./test/selenium/Admin_InternalUser_Suite.html ./test/selenium/PortfolioAgency_Suite.html ./test/selenium/FOAdmin_Suite.html ./test/selenium/PublicWebsite_Suite.html ./test/selenium/SystemAdmin_Content_Suite.html ./test/selenium/SystemAdmin_MetaData_Suite.html
# shut the virtual display down again
killall Xvfb
And I can see the result of the most recent test (you can see the name of my Jenkins task folder):
http://<JENKINS.MY.COMPANY>/job/seleniumRegressionTest/ws/seleniumResults/index.html
Earlier tests are all saved on the Jenkins server, so I can view them if I need to.
I'm running TeamCity 6.5 on a Windows Server, with a couple of build agents on the same server (all running as services under the system user). I had previously been building Silverlight projects and running the StatLight (v1.4.4147) tests under Jenkins with no problems. On Jenkins, I called the StatLight tests in a custom script as follows:
StatLight.exe -x="Tests.xap"
StatLight.exe -x="MoreTests.xap"
StatLight.exe -x="EvenMoreTests.xap"
... etc., but when I migrated my build jobs to TeamCity, I also changed these into a single command line step as follows:
StatLight.exe --teamcity -x="Tests.xap" -x="MoreTests.xap" -x="EvenMoreTests.xap"
This works about 50% of the time, but when it fails, there's no output in the build log to tell me why - I just get:
[11:41:18]: [MyProject\bin\Release\MoreTests.xap] Tests.ExtensionsTests.WatchObservableCollection
[11:41:18]: [MyProject\bin\Release\MoreTests.xap] Tests.SubscribingModelBaseTests.DisposeIsCalled
[11:41:18]: [MyProject\bin\Release\MoreTests.xap] --- Completed Test Run at: 28/09/2011 11:41:18. Total Run Time: 00:00:11.8125000
[11:41:19]: [MyProject\bin\Release\MoreTests.xap] Test run results: Total 6, Successful 6, Failed 0,
[11:41:19]: [Step 5/6] MyProject\bin\Release\EvenMoreTests.xap (9m:42s)
... and then nothing more. The time reported in that last line just goes up and up until I kill the build job. Adding the --debug switch to StatLight doesn't improve the above output either.
Right now, I've switched the TeamCity build step to call each test individually, as I did in Jenkins, but this is more of a workaround than a proper solution. And of course, I may still run into the above problem - I've yet to find out.
What I'd like to know is what steps I can take to debug this issue properly, or whether there are known issues that can cause the above behaviour?
There was one issue fixed in version 1.5 relating to TeamCity: http://statlight.codeplex.com/workitem/13654
I'm not sure it will fix your issue, but would you mind upgrading, trying it, and reporting back?