How to run beans one after another - javabeans

I have created a project that has to run beans when it is initialized.
I have created 3 beans in the dispatcherServlet.
How do I run those beans in order? There are 3 beans, A, B and C,
and they should run one after another: first A, then B, then C.

Assuming you are using a framework like Spring, and assuming that by "running the beans" you mean something like an ApplicationRunner which runs once during application start-up, you can simply annotate the bean methods with @Order.
The higher the number, the later the runner starts.
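For example, a minimal sketch assuming Spring Boot (the configuration class and runner bean names are illustrative, not from the question):

import org.springframework.boot.ApplicationRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.annotation.Order;

@Configuration
public class RunnerConfig {

    @Bean
    @Order(1) // lowest value runs first
    public ApplicationRunner runnerA() {
        return args -> System.out.println("A");
    }

    @Bean
    @Order(2)
    public ApplicationRunner runnerB() {
        return args -> System.out.println("B");
    }

    @Bean
    @Order(3) // highest value runs last
    public ApplicationRunner runnerC() {
        return args -> System.out.println("C");
    }
}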
If instead the beans are dependencies, you should inject them into each other in the necessary order (A into B and B into C); the framework will then resolve them in the order needed, as in the sketch below.
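A minimal sketch of the injection approach, assuming plain Spring component scanning (the class bodies are illustrative):

import org.springframework.stereotype.Component;

@Component
class A {
    A() { System.out.println("A created"); }
}

@Component
class B {
    // Spring must construct A before it can construct B
    B(A a) { System.out.println("B created after A"); }
}

@Component
class C {
    // likewise, B must exist before C
    C(B b) { System.out.println("C created after B"); }
}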

Related

How to share/pass a variable among multiple thread groups within jmeter and without using beanshell assertion

I have declared 1 user defined variable (A=wait) in a test plan that has 2 thread groups. When the 1st thread group completes its execution, I change the value to "go" (A=go) using a Beanshell post-processor. Now, in the second thread group, I want (A) to pick up the updated value (meaning "go", not "wait"), but I am not able to get the updated value in the 2nd thread group. I am not using any regular expression extractor, just using and updating the user defined variable.
I tried Beanshell pre- and post-processors. First I created a Beanshell sampler in which I changed the value (vars.put("A1","go");), then I created a Beanshell post-processor (${__setProperty(A,${A1})}) in the first thread group, and then in the 2nd thread group I added a Beanshell pre-processor to get the value (${__property(A)})
I also used a Beanshell assertion to pass the variable to the next thread group, but the next thread group didn't catch the updated value.
If you don't want to use scripting - take a look at Inter-Thread Communication Plugin
There is an example test plan showing how variables could be shared.
Going forward, be aware that since JMeter 3.1 it's recommended to use JSR223 Test Elements and the Groovy language for scripting, and that in Groovy you should avoid inlining JMeter Functions or Variables.
So:
In the first thread group:
props.put('A', 'go')
In the second thread group:
go = props.get('A')
or, if you prefer a function:
${__P(A,)}

How to skip a testcase if a link is not present and go to next link in Robot framework

Scenario:
There are 5 links on the Home page:
Link 1
Link 2
Link 3
Link 4
Link 5
Each of the above links is a separate test case, so there are a total of 5 test cases.
All the links may not be present on all the sites, according to the requirements.
So I need to write a Robot Framework test case which works dynamically for all the sites. For example, one site may have only 3 links while others have all 5. So it is like SKIPPING a particular test case if that link is not present.
*** Keywords ***
Go to Manage Client Reports
    Click Link    link:Manage Client Reports
Can anyone help?
In the upcoming Robot Framework release 4.0 a new test status, skipped, will be introduced. Here is a brief status of the release:
Past due by 27 days, 87% complete
Major release concentrating on adding the skip status (#3622), IF/ELSE
(#3074) and enhancing the listener API (#3296 and #3538). Last major
release to support Python 2.
So it could be ready any time now.
This is what you can have: New SKIP status #3622. There will be Skip If and Skip keywords, and more, to be used.
How to skip tests
There are going to be multiple ways:
A special exception that library keywords can use to mark a single test to be skipped. See also #3685.
BuiltIn keyword Skip (or Skip Test and Skip Task) that utilizes the aforementioned exception.
BuiltIn keyword Skip If to skip based on a condition.
When the skipping exception is used in a suite setup, all tests in the suite are skipped.
Command line option --skip to unconditionally skip tests based on tags. Similar to --exclude but skipped tests are shown in logs/reports
with a skip status and not dropped from execution altogether.
Command line option --skiponfailure to skip tests if they fail. Similar effect to the current --noncritical.
What about criticality
As already discussed in #2087, the skip status is a very similar feature
to Robot's current criticality concept. There are many people who
would like to have both, but I don't think that's a good idea and
believe it's better to remove criticality when skipping is added.
Separate issue #3624 covers removing criticality and explains this in
more detail.
Colors
Skip status needs a specific color to match current pass (green) and
fail (red). Yellow feels like a good candidate with a traffic light
metaphor, but I'm open to other ideas and we could possibly change
other colors as well. Probably should make colors configurable too --
currently only report background colors support it.
Report background color mentioned above needs some thinking as well.
Currently it's either green or red, but with the added skip status we
could use also yellow or whatever skip color we decide to use.
Different scenarios where different colors could be used are listed
below (assuming green/yellow/red scheme):
All tests pass. This is naturally green.
Any test fails. This is naturally red.
Any test is skipped (no failures). This probably should be green but could also be yellow.
All tests skipped. This could be yellow. Could also be green but that's a bit odd if all tests are yellow.
Depending on your deadlines you might not be able to wait for this release; nevertheless it is good to know about.
There is an advanced solution where you can generate your test cases at run-time. To do so you have to implement a small library that also acts as a listener. This way it can have a start_suite method that will be invoked and will get the suite(s) as Python object(s) (robot.running.model.TestSuite). Then you can use this object along with Robot Framework's API to create new test cases. The idea below was inspired by and is based on this blog post: Dynamically create test cases with Robot Framework.
DynamicTestLibrary.py:
from robot.running.model import TestSuite

class DynamicTestLibrary(object):
    ROBOT_LISTENER_API_VERSION = 3
    ROBOT_LIBRARY_SCOPE = 'GLOBAL'
    ROBOT_LIBRARY_VERSION = 0.1

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self
        self.top_suite = None

    def _start_suite(self, suite, result):
        self.top_suite = suite
        self.top_suite.tests.clear()  # remove placeholder test

    def add_test_case(self, keyword, *args):
        tc = self.top_suite.tests.create(name=keyword)
        tc.keywords.create(name=keyword, args=args)

globals()[__name__] = DynamicTestLibrary
UPDATE for Robot Framework 4.0
Due to the backward incompatible changes made in the 4.0 release (the running and result models have been changed), the add_test_case function should be changed as below if you are using version 4.0 or above.
def add_test_case(self, name, keyword, *args):
    tc = self.top_suite.tests.create(name=name)
    tc.body.create_keyword(name=keyword, args=args)
You can utilize this library in a suite setup, in which you check which links are present and add test cases for the ones that are available.
test.robot
*** Settings ***
Library    DynamicTestLibrary
Suite Setup    Check Links And Generate Test Cases

*** Variables ***
#@{LINKS}    Manage Clients    # test input 1
@{LINKS}    Manage Clients    Manage Client Hardware    # test input 2
#@{LINKS}    Manage Clients    Manage Client Hardware    Manage Client Reports    # test input 3

*** Test Cases ***
Placeholder
    [Documentation]    Placeholder test that will be removed during execution.
    No Operation

*** Keywords ***
Check Links And Generate Test Cases
    FOR    ${link}    IN    @{LINKS}
        DynamicTestLibrary.Add Test Case    Go to ${link}
    END

Go to Manage Client Reports
    Log Many    Click Link    link:Manage Client Reports

Go to Manage Client Hardware
    Log Many    Click Link    link:Manage Client Hardware

Go to Manage Clients
    Log Many    Click Link    link:Manage Clients
Go to ${link} will give the appropriate keyword name, which will be called in a test case with the same name. You can check with each example input list that the number of executed tests equals the length of the list.
Here is the output:
# robot --pythonpath . test.robot
==============================================================================
Test
==============================================================================
Go to Manage Clients | PASS |
------------------------------------------------------------------------------
Go to Manage Client Hardware | PASS |
------------------------------------------------------------------------------
Test | PASS |
2 critical tests, 2 passed, 0 failed
2 tests total, 2 passed, 0 failed
==============================================================================

Inconsistent behavior of Quartz2 scheduler in Apache Camel

I have an Apache Camel project that is using Quartz2 as the scheduler. The requirement is to make it a cluster. The code is deployed to WebLogic 12c, and Quartz is configured as per many samples, with clustering enabled.
This is my properties file (without the datasource)
org.quartz.scheduler.instanceName = MyScheduler
org.quartz.scheduler.instanceId = AUTO
org.quartz.scheduler.skipUpdateCheck = true
org.quartz.scheduler.jobFactory.class = org.quartz.simpl.SimpleJobFactory
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 10
org.quartz.threadPool.threadPriority = 5
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
org.quartz.jobStore.useProperties=true
org.quartz.JobBuilder.requestRecovery=true
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
When I deploy and start both nodes I see that the QRTZ_SCHEDULER_STATE table has an extra entry for one of the nodes:
MyScheduler-routerContext server_node21567108546690
MyScheduler-routerContext-1 server_node11565896495100
MyScheduler-routerContext-1 server_node11567108547295
And I am guessing that, because of that, one node is only called once in a while, while the other node gets called all the time (so occasionally both nodes are invoked at the same time).
I have tried a clean restart of the WebLogic nodes but the issue is still there.
This is what my route(s) look like:
from("quartz2://provRegGroup/createUsersTrigger?cron={{create_users_cron}}&job.name=createUsersJob")
.routeId("createUsersRB")
.log("**** starting check for create users");
//where
//create_users_cron=0+0,5,10,15,20,25,30,35,40,45,50,55+*+*+*+?
//expecting one node being called by the scheduler at a time..
I figured out what caused the issue. Apparently there were orphan WebLogic processes running on one (or even both) of the nodes - this would be a question for our tech archs as to why it was such a mess. ps was showing two WebLogic servers running on a node: one that I had started recently and one that had been there for, say, a month.
Expecting that this would never happen in a production environment, I assume the issue has been resolved.

Geb, Spock, Gradle and maxParallelForks

I am having some trouble understanding an issue we are having with our Geb/Spock tests. We are using Gradle and trying to run our tests in parallel. As I understand it, the maxParallelForks property in Gradle will run test classes in separate JVMs.
The issue I am running into is that when I have 6 test classes and I set maxParallelForks to 4, when the test starts I will get 4 test classes running in parallel. Awesome! But the final 2 classes are where the problem is. Let's say that out of the first 4 classes running, 2 of the classes are done in 1 minute and 2 of the classes are done in 5 minutes. What I'm seeing is that instead of the first 2 finishing and starting the next 2 classes, Gradle seems to wait until the 2 long-running classes finish before spinning up the other forks. This is way less than ideal.
Am I misunderstanding something or am I missing a property somewhere? This is what I have in my build.gradle:
tasks.withType(Test) {
    systemProperties System.properties
    maxParallelForks = 4
    forkEvery = 1
}
Classes are assigned to forks for execution upfront, not on a polling basis. So the first two forks will each get two classes assigned upfront and the other two forks one class each, regardless of how long each of these classes takes to finish. In the worst case scenario the two longest running classes will be assigned to a single fork. This is how it works: classes are split into groups, and then separate test JVMs (forks) are spun up, each with its own list of classes to execute.
On a side note - you don't want forkEvery = 1 - this will restart your test JVMs after each test class, slowing your test execution down for no benefit.
Using JUnit suites you can decide which set of classes gets picked up by a particular fork.
import org.junit.runner.RunWith
import org.junit.runners.Suite

@RunWith(Suite.class)
@Suite.SuiteClasses([
    TimeTaking.class, // Class that takes a lot of time
    NotSoMuchTimeTaking.class, // Class that is quick
    // Add more test classes which need to be executed in the same fork.
])
public class FirstTestSuite { // keep this empty
}
Similarly, create a SecondTestSuite, and so on (a sketch follows below).
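For example, a second suite might look like this (written as plain Java; the class names are hypothetical and merely illustrate pairing a slow class with quick ones):

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({
    AnotherTimeTaking.class,  // another slow class
    QuickOne.class,           // fast classes sharing the same fork
    AnotherQuickOne.class
})
public class SecondTestSuite { // keep this empty too
}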
In addition to the above steps, include the *TestSuite.class pattern in your build.gradle:
tasks.withType(Test) {
    systemProperties System.properties
    maxParallelForks = 4
    forkEvery = 1 // per the side note above, consider dropping this line
    include '**/*TestSuite*.class'
}
This way, you will be able to control your execution and decide which test classes need to be executed in what order.

Apache ODE - BPEL compensation handler - weird behavior (or maybe I'm wrong somewhere)

I've got this example of BPEL from this location https://svn.wso2.org/repos/wso2/carbon/platform/trunk/products/bps/modules/samples/product/src/main/resources/bpel/2.0/SampleCompensationHandlers/FlightReservationProcess/
The example sets a given variable when it executes a given scope.
The last scope throws an error, so the fault triggers the relevant handler for that scope, which rethrows the fault. That way the fault handler for the process is triggered, where compensation is performed for every successfully completed scope.
I've created a BPEL project in Eclipse and put the example in there, then I started some tests. But I've found very strange behavior:
I've got correct results just a few times:
CarReservationActivity: 1
CarReservationCompensated: 1
HotelReservationActivity: 1
HotelReservationCompensated: 1
FlightReservatoinActivity: 1
In all other cases I've got incorrect results:
A)
CarReservationActivity: 1
CarReservationCompensated: 0
HotelReservationActivity: 1
HotelReservationCompensated: 1
FlightReservatoinActivity: 1
B)
CarReservationActivity: 1
CarReservationCompensated: 1
HotelReservationActivity: 1
HotelReservationCompensated: 0
FlightReservatoinActivity: 1
And when the result is incorrect, case A) dominates.
I cannot find where the problem is. Everything looks fine.
Can someone help me solve the issue?
Used software:
- Windows 7 Enterprise, SP1, 32bit
- Apache Tomcat v.6.0.18
- Apache ODE v.1.3.5
- Eclipse Indigo v.3.7.2 SR2
- BPEL designer v.1.0.1
- Java 7 (v.1.7.0_07)