Run Selenium in different environments

Issue I'm running into: I have one Selenium test that needs to run in different environments, one at a time. In one environment (SIT) the code types a keyword and generates a list of terms; another environment (PROD) does the same thing but generates a different list. I need to validate the first term that appears in the list in both SIT and PROD. The test is failing because the data in SIT is different from PROD. Is there a generic way to run the same code in both environments even though they generate different results? Can you please point me in the right direction?

There are several ways to achieve that. The most appropriate (IMHO) ways to address environment independence are:
1. Use environment variables.
2. Use property files holding different properties for different environments.
3. Use your execution environment's own properties (like JVM system properties in Java).
Options 1 and 3 are, IMHO, the most suitable for integrating your code into a CI process.

You can pass those values via a config file, then read and use them in your test code.
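As a rough sketch of how options 1–3 can be combined (the property names, file names, and expected values below are placeholders, not your actual test data): pick the environment from a system property or environment variable, load a per-environment properties file, and compare against the expected term it defines instead of a hard-coded value.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class EnvConfig {

    private final Properties props = new Properties();

    public EnvConfig() throws IOException {
        // Option 3: -Denv=sit JVM property; option 1: ENV environment variable; default to "sit".
        String env = System.getProperty("env",
                System.getenv().getOrDefault("ENV", "sit"));
        // Option 2: one properties file per environment,
        // e.g. config-sit.properties and config-prod.properties on the classpath.
        try (InputStream in = getClass().getResourceAsStream("/config-" + env + ".properties")) {
            if (in == null) {
                throw new IOException("No config file found for environment: " + env);
            }
            props.load(in);
        }
    }

    public String get(String key) {
        return props.getProperty(key);
    }
}
```

The test then reads its expectation from the config instead of hard-coding it, e.g. `assertEquals(new EnvConfig().get("expected.first.term"), firstSuggestion.getText());`, so the same Selenium code runs unchanged in SIT and PROD.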

Related

Is it possible to specify a gitlab runner by name?

We have multiple runners that share a tag, and these tags can't be changed because of workplace policies. So we currently have something set up like this:
#12345 (Foo)
tag: foobar
#23456 (fOo)
tag: foobar
#34567 (foO)
tag: foobar
However, when we run a job using the "foobar" tag, it sometimes fails, depending solely on the runner that gets chosen. I ended up running the pipeline a dozen or so times to check, and runners #12345 and #23456 always end up failing, even when the build is fine. The #34567 runner succeeds when the build is fine and fails when it isn't. The runner documentation says I can specify the runner by name, but looking over the keyword reference documentation I'm not seeing how to do that.
It's not possible. Runners can only be selected by tags, and runners with an identical tag should be homogeneous in terms of software versions and hardware. The first one that is ready to take your CI job will run it.
So one should never need to select a specific runner within a group that shares a single tag.
Each job can know which runner is executing it by looking at the environment variable CI_RUNNER_ID, but that is not really usable for your purpose, unless you force a job failure whenever the runner is not the "good" one and retry until the job is randomly picked up by the runner you want. But of course that would be a weird workaround.
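Just to illustrate that workaround (not a recommendation): a minimal .gitlab-ci.yml sketch, assuming the "good" runner is #34567 and the job name is a placeholder:

```yaml
build:
  tags:
    - foobar          # tags can only narrow the choice to runners sharing this tag
  retry: 2            # GitLab allows at most two automatic retries per job
  script:
    # CI_RUNNER_ID is a predefined variable; fail fast on the "wrong" runners
    # so a retry may land on runner #34567.
    - |
      if [ "$CI_RUNNER_ID" != "34567" ]; then
        echo "Picked up by runner $CI_RUNNER_ID, not 34567 - failing to trigger a retry"
        exit 1
      fi
    - echo "Running the real build here"   # placeholder for the actual build steps
```

Even then there is no guarantee a retry lands on the runner you want, which is why this is a weird workaround rather than a real fix.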
No. The documentation is misleading. You can only use tags to limit what runner(s) your jobs run on.
The only other way around this would be to register your own runner(s) for your project/group, giving them the tags you need. Though I doubt that's an acceptable solution, for obvious reasons.
Ultimately, your GitLab administrator will need to configure your runner(s) to have an additional tag by which you can uniquely identify the runner(s) if you want to be able to have your jobs use a specific runner out of your shared runner pool.

Is it possible to publish different tests results in Azure Devops?

I would like to publish different tests results. I have two suites. The last one overrides the results of the first one. Is it possible to get both on the same page below “tests and coverage”?
To answer this question: no, there is not. You can merge the results of the two different test suites, or save the results elsewhere, but it is not possible to display two separate results for two different test suites inside the Azure DevOps GUI.

How to use environment specific test data in Karate

I would like to know how to use different data sets at runtime when executing tests in various environments. I have read the documentation but I am unable to find the best solution for this scenario.
Requirement: execute a test in the QA environment and then execute the same test in SIT, but use different data in the request, e.g. customerIds. The reason for this is that the data set up in each environment is very different.
I would appreciate it if you could propose the best solution for this scenario.
Here in the documentation you can find an explanation of how to do this: https://github.com/intuit/karate#environment-specific-config
Then you can simply specify the environment when launching Karate:
mvn test -DargLine="-Dkarate.env=e2e"
And all your tests will be able to use the variables you've defined for the specified environment.
Edit: another hint: in your config file, specify the path of a file. Then, depending on your env, you'll be able to read a different file containing all your data.
Edit after your comment:
Let's say you defined two environments, "qa" and "prod".
For every piece of data that differs between the two, simply create two files: myFile-qa.json and myFile-prod.json.
Now, in your tests, when you want to read a file, just use read('myFile-' + env + '.json'). Just like that, you read the correct file for the defined environment.
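For completeness, the environment switch itself lives in karate-config.js; a minimal sketch along the lines of the linked documentation (the data-file naming convention here is just an example):

```javascript
// karate-config.js - evaluated once per run by Karate
function fn() {
  var env = karate.env; // picks up -Dkarate.env from the command line
  if (!env) {
    env = 'qa'; // default environment when none is given
  }
  return {
    env: env,
    // hypothetical convention: one data file per environment
    customerFile: 'classpath:data/customers-' + env + '.json'
  };
}
```

A feature can then do `* def customers = read(customerFile)` (or build the file name inline as above) and automatically gets the QA or SIT data set depending on the karate.env you pass to Maven.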

Can you run two test cases simultaneously in a Test Suite in Microsoft Test Manager 2010?

I am trying to create a unit test that runs on two machines in Microsoft Test Manager 2010. In this test I want some client-side and server-side test code to run simultaneously, the client-side test being dependent on the server-side test working successfully.
When putting together a test suite in Test Manager, I want to be able to set both tests to have the same order value (so they run at the same time), but the validation prevents setting the order that way.
Is there any way I can achieve the simultaneous test execution I am after?
Sorry for the late answer... I missed the notification about your answers to my question :-( Sorry for that!
In case you are still looking for a solution, here is my suggestion.
I suppose you have a test environment consisting of two machines (for server and client).
If so, you will not be able to run tests on both of them, or rather, you will not have enough control over how the tests run. Check "How to: Run automated tests on multiple computers at the same time".
Actually, I posted a related question to the Visual Studio Development Forum; you can check the answers I got there: "Is it possible to run tests on several virtual machines, which belong to the same environment, using the build-deploy-test workflow".
That all means you will end up creating two environments, each consisting of one machine (one for the server and one for the client).
But then you will not be able to reference both environments in your build definition, as you can only select one environment in the DefaultLabTemplate.
That leads to the solution I can suggest:
- Create two lab environments.
- Create three build definitions:
  - the first one only builds your test code;
  - the second one deploys the last successful build from the first and starts the tests on the server environment;
  - the third one deploys the last successful build from the first and starts the tests on the client environment.
- Run the first build definition automatically at night.
- Trigger the latter two simultaneously later.
It's not really nice, I know...
You will have to synchronize the build definition building the test code with the two build definitions running the tests.
I was thinking about setting up similar tests some months ago and it was the best solution I came up with...
Another option I have not tried yet could be:
- Use a single test environment consisting of two machines and give them different roles (server and client respectively).
- In MTM, create two Test Settings (one for the server role and one for the client role).
- Create a batch file that starts the tests using the tcm.exe tool (see "How to: Run Automated Tests from the Command Line Using Tcm" for more details; there is a sketch after this list). You will need two tcm.exe calls, one for each of the Test Settings you have created. Since a tcm.exe call just queues a test run and returns (more or less) immediately, this batch file will start the tests (more or less) simultaneously.
- Create a build definition using DefaultLabTemplate. This definition will:
  - build the test code,
  - deploy it to both machines in your environment,
  - run your batch script as the last deployment step (you will have to make sure this script is located on the build machine, deploy it there, or make it accessible from the build machine).
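A rough sketch of what that batch file could look like, based on the tcm.exe documentation referenced above (the IDs, collection URL, team project, and Test Settings names are all placeholders):

```bat
@echo off
rem Queue the server-side run; tcm returns as soon as the run is created
tcm run /create /title:"Server tests" /planid:11 /suiteid:21 /configid:1 ^
    /settingsname:"ServerTestSettings" ^
    /collection:http://tfsserver:8080/tfs/DefaultCollection /teamproject:MyProject

rem Queue the client-side run immediately afterwards, so both start (more or less) together
tcm run /create /title:"Client tests" /planid:11 /suiteid:22 /configid:1 ^
    /settingsname:"ClientTestSettings" ^
    /collection:http://tfsserver:8080/tfs/DefaultCollection /teamproject:MyProject
```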
As I've said, I have not tried it yet.
The disadvantage of this approach is that you will not see the test part in the build log, since the tests will not be started by the means provided by DefaultLabTemplate, so the build will not fail when the tests fail.
But you will still be able to see the test outcomes in MTM and will have test results for each machine.
Depending on what is more important to you (having test results, having a build definition that fails if tests fail, or having both), it could be a solution for you.
Yes, you can, with a modified TestSettings file:
http://blogs.msdn.com/b/vstsqualitytools/archive/2009/12/01/executing-unit-tests-in-parallel-on-a-multi-cpu-core-machine.aspx
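In short, the linked post boils down to setting parallelTestCount in the Test Settings file; a fragment from memory (so double-check it against the article), where 0 means "one test per CPU core":

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Example .testsettings fragment enabling parallel unit test execution (VS 2010) -->
<TestSettings name="ParallelSettings"
              xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <!-- parallelTestCount="0" runs as many tests in parallel as there are CPU cores -->
  <Execution parallelTestCount="0">
    <TestTypeSpecific />
  </Execution>
</TestSettings>
```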

How to deploy a Pentaho report with subreports across multiple environments

I'm trying to create reports that I could deploy in different environments (test, production) and/or with different databases, without changing the prpt file.
So I created some JNDI connections and passed the JNDI name as a parameter to an xaction, which in turn executed the query and passed the result to the prpt. It worked great.
Until I started using subreports.
I think there's no way to pass a result set to a subreport for each line of the main report.
It seems that if you use subreports, you have to define the connection and the query inside the subreport.
Am I wrong? Has anyone tried this? What's the "proper" way to deploy a multi-tenant report with subreports, and pass the connection or jndi as a parameter?
(I'm open to drop the use of jndi if there's another way)
Thanks!
Update: There's a bug related to this in biserver 3.7 & 3.8 link
Nope, the connection can be defined in the parent report. Just make sure you specify it in the Query name setting of the subreport itself.
XActions precompute all datasets before the reporting engine has a chance to actually work with them. External datasets are precomputed without any information about your subreports, and therefore this will fail (unless you use several ugly tricks to use a calculated query name as a lookup key into the precomputed table models).
Why don't you use JNDI, like everyone else? JNDI was designed to abstract connection information into a logical name. The connection is defined outside of the report, and the report just references the name.
Read more in my blog post named "Dont hardcode host names, use JDNI" (which probably describes the core of your problem ;) ).
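To make the JNDI idea concrete in plain Java terms (the data source name is a placeholder; in Pentaho you would simply type this logical name into the report's JNDI connection instead of writing code): the container binds a different database under the same name in test and production, and the consumer only ever looks the name up.

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class JndiLookupExample {

    // The logical name is the only thing the code (or report) knows;
    // each environment binds its own database under it.
    private static final String DATA_SOURCE_NAME = "java:comp/env/jdbc/ReportsDS";

    public static Connection openReportConnection() throws NamingException, SQLException {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup(DATA_SOURCE_NAME);
        return ds.getConnection();
    }
}
```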