When I run a test from the Mobile Test Workbench workspace, I am not able to see Resource Monitoring or the other performance parameters in the Run Configuration dialog box.
I am using MobileFirst Rational Test Workbench 8.7.
It is unclear from your question whether you have properly enabled Resource Monitoring. The feature must be enabled as described in the following user documentation topic: http://www-01.ibm.com/support/knowledgecenter/SSBLQQ_8.6.0/com.ibm.rational.test.lt.moeb.doc/topics/t_run_wb.html
Resource monitoring starts immediately after the test run is initiated. Resource data is collected at the frequency specified in the polling interval. At every interval, the collected data for each resource is averaged and sent back to the workbench to be plotted in the report. Resource monitoring is terminated when the test run finishes.

Procedure
From the test workbench, you can initiate a mobile test run in any of the following ways in the Test Workbench perspective:
- In the Test Navigator view, right-click a test and click Run As > Test.
- From the Test Navigator view, open the test and, in the test editor, click the Run button.
- Add the mobile test to the Compound Test editor.
A wizard displays the list of available devices. Select the device on which the test will be run.
I have a microservices system written in Java and deployed using Docker containers.
To run our nightly tests, a tester container is started, runs its e2e tests against the system, and creates its JUnit report.
At the end of the tester's run I want to generate a summarized report, simply a list of the failed tests, and send it to another server for long-term storage and analysis.
I suppose I could alter the tester's Dockerfile and add this functionality as a command that processes the JUnit report and sends it, but I wonder what the best practice is, both for generating the report and for sending it.
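For illustration, something like the following Node script, run as the container's last step, is roughly what I have in mind (the report path, endpoint URL, and regex-based parsing are all placeholders and simplifications; a real version would use a proper XML parser):

    // summarize-failures.js - collect failed test names from a JUnit
    // report and POST them to a collecting server (hypothetical endpoint).
    const fs = require('fs');
    const https = require('https');

    const xml = fs.readFileSync('/reports/junit-report.xml', 'utf8');

    // Drop self-closing (passing) <testcase .../> elements first, then
    // find <testcase>...</testcase> blocks that contain a <failure>.
    const cleaned = xml.replace(/<testcase\b[^>]*\/>/g, '');
    const failed = [];
    const testcaseRe = /<testcase\b([^>]*)>([\s\S]*?)<\/testcase>/g;
    let m;
    while ((m = testcaseRe.exec(cleaned)) !== null) {
      if (m[2].includes('<failure')) {
        const name = /name="([^"]*)"/.exec(m[1]);
        if (name) failed.push(name[1]);
      }
    }

    // Send the summary as JSON.
    const body = JSON.stringify({ failedTests: failed });
    const req = https.request('https://reports.example.com/summaries', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
    }, res => res.resume());
    req.on('error', err => { console.error(err); process.exit(1); });
    req.end(body);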
Thanks in advance,
Ariel
Using TestCafe, I have an existing test suite that includes a sign-in function using PIV card authentication; the code was written manually, and it all works well when run normally. However, in the interest of speed, I was hoping to record tests using the TestCafe Studio software rather than writing them by hand.
The issue we are running into: if I click "Record a new test" on an existing test script, it attempts to run through the existing sign-in code (which works when executed as a normal test, not a recording) and fails to log in. I believe this is due to the nature of PIV.
Is there perhaps a setting in TestCafe Studio that maintains the state of a logged-in session rather than killing it at each test start? I'm already logged in with my PIV when I start the test, but it appears to log me out at the start of each session. Does anyone have experience with this and know what I can do to make it remember me when I record a new test?
If I understand your usage scenario correctly, you are running tests under a user profile that is already logged in. Unfortunately, TestCafe Studio has no mechanism for launching a recording with a profile, nor for launching a browser with custom options in general.
I can suggest the following workaround (a minimal sketch follows the steps):
1. When recording the script, turn off the code responsible for the login.
2. Use await t.debug() instead of the login call.
3. When the test starts, wait for the debug panel to appear in the browser and unlock the page: sign in manually, then click Resume on the debug panel.
4. Record your script.
5. Replace await t.debug() with the login function.
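For illustration, the test might look something like this while recording (the fixture name and page URL are placeholders):

    // Sketch of the test file during recording.
    fixture('PIV-protected app')
        .page('https://app.example.com');

    test('New recorded test', async t => {
        // Instead of calling the usual login helper, pause here so you
        // can sign in manually with the PIV card, then click Resume.
        await t.debug();

        // ...steps recorded by TestCafe Studio are appended here...
    });

Once recording is done, swap await t.debug() back for the login function so the test can run unattended.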
I connect a remote Windows server to Jenkins using the node/slave concept, and I created a "freestyle" job that runs a batch command in which I wrote Selenium scripts. The job was working fine.
When a test fails, it takes a screenshot of the browser window. Unfortunately, the browser always runs at a fairly low resolution: 1024 x 656.
If I run the tests manually within the VM (logging in and running them outside of Jenkins), they run at the desktop resolution of that login.
So, my question: how can I set the screen resolution that is used when the tests are run by the Jenkins service? Is this possible?
I have a build server running Windows Server 2008 R2. It runs a suite of automated acceptance tests that use the Selenium WebDriver. These tests are triggered automatically after a check-in and are failing because the screen resolution is too small: they are unable to access elements in a modal window, because the modal is too big to fit within the limited viewport the tests run in.
If I RDP into the machine at 1280 x 1024, I can run the tests and see that they pass just fine. Is there a way to specify the "default" resolution for a Windows box when a user isn't actually logged in with a monitor?
Thanks!
I found a link; it looks like you can set the desired resolution with regedit: http://philipflint.wordpress.com/2008/06/30/changing-the-screen-resolution-in-windows-server-2008-server-core/
From the URL:
    HKLM\System\CurrentControlSet\Control\Video\{ClassID}\0000\DefaultSettings.XResolution
    HKLM\System\CurrentControlSet\Control\Video\{ClassID}\0000\DefaultSettings.YResolution

The ClassID is a GUID; there is one for each display driver installed on your system. You can tell which one is currently in use because, below its 0000 key, you will have another key called "Volatile Settings".
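For example, from an elevated prompt (the {ClassID} GUID is a placeholder for the one in use on your system, and the values are pixels; treat this as a sketch rather than something verified on every Windows version):

    reg add "HKLM\System\CurrentControlSet\Control\Video\{ClassID}\0000" /v DefaultSettings.XResolution /t REG_DWORD /d 1280 /f
    reg add "HKLM\System\CurrentControlSet\Control\Video\{ClassID}\0000" /v DefaultSettings.YResolution /t REG_DWORD /d 1024 /f

A reboot, or at least a fresh session, is typically needed before the Jenkins service picks up the new resolution.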
Apologies if this is a bit of a noob question.
I have Selenium RC set up on a server and a test hub application.
When a user chooses to run a test from the test hub, which browsers will the test be run on: those on the server, or those on the user's machine?
Basically, I want my test hub app to serve two purposes: first, to let a user trigger a test and watch it run; and second, to make the underlying tests accessible so they can be run automatically by the build server using CruiseControl.
Personally, I wouldn't worry about running the tests on the person's PC; I would go for the option of video-recording the tests while they run, so that the person can look at the video once the test is complete.
I would set up a Selenium Grid so that when you trigger a test, it is pushed to the grid and recorded there. I wrote a blog post in May that describes how to set up video recording on Linux.
If you don't have the time or hardware to set all this up, you can always trigger the tests to run on Sauce Labs; they will record the video for you automatically so that people can watch them.
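Either way, the tests run on the grid's browsers, not the user's machine; the test code just points at the hub. A rough sketch with the selenium-webdriver JavaScript bindings (hub and application URLs are placeholders, and RC-era tests would need the equivalent remote-server setting):

    // Run a test against a browser on the grid rather than locally.
    const { Builder } = require('selenium-webdriver');

    (async () => {
      const driver = await new Builder()
        .usingServer('http://grid-hub.example.com:4444/wd/hub') // grid hub
        .forBrowser('firefox')
        .build();
      try {
        await driver.get('https://app-under-test.example.com');
        // ...assertions and test steps here...
      } finally {
        await driver.quit();
      }
    })();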