Analog of TestNG Reporter.getOutput() in Spock

I run API tests using Groovy and Spock.
Request/response data produced by third-party libraries appears on standard out (I see it in the Jenkins log).
Question:
what is the proper way to start and stop recording of standard out for each test iteration into some list of strings?
TestNG has Reporter.getOutput(result), which returns all log entries that appeared while the test iteration ran.
Is there something similar in Spock?
Am I right in assuming it should be some implementation of a run listener, where I start recording in beforeIteration() and attach the output to the report in afterIteration()?

Solved by using the OutputCapture from spring-boot-starter-test.
A sample RunListener:
class RunListener extends AbstractRunListener {

    OutputCapture outputCapture

    void beforeSpec(SpecInfo spec) {
        Helper.log "[BEFORE SPEC]: ${spec.name}"
        outputCapture = new OutputCapture()
        outputCapture.captureOutput() // register a copy of the System.out / System.err streams
    }

    void beforeFeature(FeatureInfo feature) {
        Helper.log "[BEFORE FEATURE]: ${feature.name}", 2
    }

    void beforeIteration(IterationInfo iteration) {
        outputCapture.reset() // clear the stream copy before each test iteration
    }

    void afterIteration(IterationInfo iteration) {
    }

    void error(ErrorInfo error) {
        // attach the content of the copy stream to the report if the test iteration failed
        Allure.addAttachment("${error.method.iteration.name}_console_out", "text/html", outputCapture.toString(), "txt")
    }

    void afterFeature(FeatureInfo feature) {
    }

    void afterSpec(SpecInfo spec) {
        outputCapture.releaseOutput()
    }
}
In short:
OutputCapture is an OutputStream implementation that writes to both the original stream and an in-memory copy. The copy is cleared in beforeIteration() and attached to the report by the RunListener (in this sample in error(), i.e. only for failed iterations), so each test receives only its own part of the output. Note that the listener still has to be registered, e.g. via a Spock global extension (an IGlobalExtension implementation declared under META-INF/services) that calls spec.addListener(...).
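For illustration, here is a minimal sketch of that tee idea in plain Java. The names are illustrative; this is not the actual spring-boot-starter-test implementation, just the mechanism it relies on:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintStream;

// Minimal "tee" capture: everything written to System.out still reaches the
// original stream, and is additionally collected into an in-memory copy.
class SimpleOutputCapture {

    private final ByteArrayOutputStream copy = new ByteArrayOutputStream();
    private PrintStream originalOut;

    void captureOutput() {
        originalOut = System.out;
        OutputStream tee = new OutputStream() {
            @Override
            public void write(int b) throws IOException {
                originalOut.write(b); // keep writing to the real console
                copy.write(b);        // and record a copy
            }
        };
        System.setOut(new PrintStream(tee, true));
    }

    void reset() {
        copy.reset(); // clear the copy, e.g. before each test iteration
    }

    void releaseOutput() {
        System.setOut(originalOut); // restore the original stream
    }

    @Override
    public String toString() {
        return copy.toString(); // the captured output for the current iteration
    }
}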

Related

GoogleTest: is there a generic way to add a function call prior to each test case?

my scenario: I have an existing unit test framework with ~3000 individual test cases, made from the TEST, TEST_F and TEST_P macros.
Internally the tested modules use a logger library, and my goal now is to create an individual log file for each test case. To do so I would like to call a function as a SetUp for each test case.
Is there a way to register such a function at the framework and get it called automatically?
The obvious solution would be to do the work in a test fixture constructor or SetUp(), but then I'd have to touch every single test case.
I do like the idea of registering a global setup at the framework with AddGlobalTestEnvironment(), but as I understand it, this is handled only once per executable.
By the way: I have acceptance tests implemented in Robot Framework and guess what? I want to repeat the task there...
Thanks for any inspiration!
Christoph
You mentioned:
The obvious solution would be to do the work in a test fixture constructor or SetUp(), but then I'd have to touch every single test case.
If the reason you think you would need to touch every single test case is to set the filename differently, you can combine the SetUp() function with the current_test_info() accessor provided by GTest to get the name of each test, and then use that to create a separate file per test.
Here is an example:
// Class for test fixture
class MyTestFixture : public ::testing::Test {
protected:
    void SetUp() override {
        test_name_ = std::string(
            ::testing::UnitTest::GetInstance()->current_test_info()->name());
        std::cout << "test_name_: " << test_name_ << std::endl;
        // CreateYourLogFileUsingTestName(test_name_);
    }
    std::string test_name_;
};

TEST_F(MyTestFixture, Test1) {
    EXPECT_EQ(this->test_name_, std::string("Test1"));
}

TEST_F(MyTestFixture, Test2) {
    EXPECT_EQ(this->test_name_, std::string("Test2"));
}
Live example here: https://godbolt.org/z/YjzEG3G77
The solution I found in the gtest docs:
class TraceHandler : public testing::EmptyTestEventListener
{
    // Called before a test starts.
    void OnTestStart( const testing::TestInfo& test_info ) override
    {
        // set the logfilename here
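        // e.g. build a per-test filename from the GTest API (test_info.test_suite_name()
        // and test_info.name() are real accessors), then hand it to your logger's
        // filename-setting function -- the call below is hypothetical:
        //   SetLogFileName( std::string( test_info.test_suite_name() ) + "." + test_info.name() + ".log" );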
    }

    // Called after a test ends.
    void OnTestEnd( const testing::TestInfo& test_info ) override
    {
        // close the log here
    }
};

int main( int argc, char** argv )
{
    testing::InitGoogleTest( &argc, argv );
    testing::TestEventListeners& listeners =
        testing::UnitTest::GetInstance()->listeners();
    // Adds a listener to the end. googletest takes ownership.
    listeners.Append( new TraceHandler );
    return RUN_ALL_TESTS();
}
This way it automatically applies to all tests linked to this main function.
Maybe I should mention: my logger is a collection of static functions that send UDP packets to a receiver that does the actual logging. I can control the filename through one of those functions. That's why I don't need to insert code into every single TEST, TEST_F or TEST_P.

How to Take Screenshot when TestNG Assert fails?

String actualValue = d.findElement(By.xpath("//*[@id=\"wrapper\"]/main/div[2]/div/div[1]/div/div[1]/div[2]/div/table/tbody/tr[1]/td[1]/a")).getText();
Assert.assertEquals(actualValue, "jumlga");
captureScreen(d, "Fail");
The assert should not be put before your capture-screen call, because a failed assertion immediately aborts the test method, so your code
captureScreen(d, "Fail");
would be unreachable.
This is how I usually do it:
boolean result = false;
try {
    // do stuff here
    result = true;
} catch (Exception_class_Name ex) {
    // code to handle the error and capture a screenshot
    captureScreen(d, "Fail");
}
// then assert on the recorded result
Assert.assertEquals(result, true);
1. A good solution would be to use a reporting framework like allure-reports.
Read here: allure-reports
2. We don't want our tests to be made ugly by adding try/catch to every test, so we will use Listeners, which use an annotation system to "listen" to our tests and act accordingly.
Example:
public class listeners extends commonOps implements ITestListener {

    public void onTestFailure(ITestResult iTestResult) {
        System.out.println("------------------ Starting Test: " + iTestResult.getName() + " Failed ------------------");
        if (platform.equalsIgnoreCase("web"))
            saveScreenshot();
    }
}
Please note I only used the method relevant to your question, and I suggest you read here:
TestNG Listeners
Now we want allure-reports to attach a screenshot every time a test fails, so we will add this built-in attachment method inside our listeners class.
Example:
@Attachment(value = "Page Screen-Shot", type = "image/png")
public byte[] saveScreenshot() {
    return ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
}
Test example:
@Listeners(listeners.class)
public class myTest extends commonOps {

    @Test(description = "Test01: Add numbers and verify")
    @Description("Test Description: Using Allure reports annotations")
    public void test01_myFirstTest() {
        Assert.assertEquals(result, true);
    }
}
Note we're using the @Listeners(listeners.class) annotation at the beginning of the class, which allows our listener to listen to our test; mind that (listeners.class) can be whatever class you named your listeners.
The @Description annotation is related to allure-reports, and as the code snippet suggests, you can add additional info about the test.
Finally, our Assert.assertEquals(result, true) will take a screenshot if the assertion fails, because we hooked our listeners class to the test.

Testing Flink with embedded Kafka

I have a simple Flink application, which sums up the events with the same id and timestamp within the last minute:
DataStream<String> input = env
        .addSource(kafkaConsumer) // the FlinkKafkaConsumer built from the consumer properties
        .uid("app");

DataStream<Pixel> pixels = input.map(record -> mapper.readValue(record, Pixel.class));

pixels
        .assignTimestampsAndWatermarks(new TimestampsAndWatermarks())
        .keyBy("id")
        .timeWindow(Time.minutes(1))
        .sum("constant")
        .addSink(simpleNotificationServiceSink);

env.execute(jobName);
private static class TimestampsAndWatermarks extends BoundedOutOfOrdernessTimestampExtractor<Pixel> {

    public TimestampsAndWatermarks() {
        super(Time.seconds(90));
    }

    // timestampReadable is the timestamp rounded to minutes, in format yyyyMMddhhmm
    @Override
    public long extractTimestamp(Pixel pixel) {
        return Long.parseLong(pixel.timestampReadable);
    }
}
I would like to implement the scenario:
Start embedded Kafka
Publish couple of messages to the topic
Consume the messages with Flink
Check the correctness of the output produced by Flink
Does Flink provide utilities to test the job with an embedded Kafka? If yes, what is the recommended approach?
Thanks.
There's a JUnit rule you can use to bring up an embedded Kafka; see https://github.com/charithe/kafka-junit.
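For context, wiring that rule into a test might look roughly like this. The class and method names are taken from the kafka-junit README and should be treated as assumptions to verify against the version you use; the Flink job wiring and sink assertions are left as comments because they depend on your setup:

import com.github.charithe.kafka.EphemeralKafkaBroker;
import com.github.charithe.kafka.KafkaJunitRule;
import org.junit.Rule;
import org.junit.Test;

public class FlinkKafkaJobTest {

    // Brings up an embedded Kafka broker for each test and tears it down afterwards
    // (KafkaJunitRule / EphemeralKafkaBroker as documented in the kafka-junit README).
    @Rule
    public KafkaJunitRule kafkaRule = new KafkaJunitRule(EphemeralKafkaBroker.create());

    @Test
    public void sumsEventsWithTheSameIdWithinOneMinute() throws Exception {
        // 1. Publish a couple of messages, ending with the marker record that
        //    makes the deserializer below report end-of-stream.
        kafkaRule.helper().produceStrings("events",
                "{\"id\":\"a\",\"constant\":1,\"timestampReadable\":\"202401010000\"}",
                "{\"id\":\"a\",\"constant\":1,\"timestampReadable\":\"202401010000\"}",
                "END_APP_MARKER");

        // 2. Run the Flink job against the embedded broker (point the consumer
        //    properties at the rule's broker), then
        // 3. assert on what the sink collected.
    }
}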
To have tests that terminate cleanly, try something like this:
public class TestDeserializer extends YourKafkaDeserializer<T> {

    public final static String END_APP_MARKER = "END_APP_MARKER"; // tests send this as the last record

    @Override
    public boolean isEndOfStream(ParseResult<T> nextElement) {
        if (nextElement.getParseError() == null)
            return false;
        return END_APP_MARKER.equals(nextElement.getParseError().getRawData());
    }
}

How to persist/read-back Run Configuration parameters in Intellij plugin

I'm making a basic IntelliJ plugin that lets a user define a Run Configuration (following the tutorial at [1]) and use said Run Configurations to execute the file open in the editor on a remote server.
My Run Configuration is simple (3 text fields), and I have it all working; however, after editing the Run Configuration and clicking "Apply" or "OK", the entered values are lost.
What is the correct way to persist and read back values (both when the Run Configuration is re-opened and when the Run Configuration's Runner is invoked)? It looks like I could create custom persistence using [2], but it seems like the plugin framework should already have a way to handle this, or at least hooks for when Apply/OK is pressed.
[1] https://www.jetbrains.org/intellij/sdk/docs/tutorials/run_configurations.html
[2] https://www.jetbrains.org/intellij/sdk/docs/basics/persisting_state_of_components.html
Hopefully this post is a bit more clear to those new to IntelliJ plugin development and illustrates how persisting/loading Run Configurations can be achieved. Please read through the code comments, as that is where much of the explanation takes place.
Also, note that SettingsEditorImpl is my custom implementation of the SettingsEditor abstract class, and likewise RunConfigurationImpl is my custom implementation of the RunConfiguration abstract class.
The first thing to do is to expose the form fields via custom getters on your SettingsEditorImpl (i.e. getHost()).
public class SettingsEditorImpl extends SettingsEditor<RunConfigurationImpl> {

    private JPanel configurationPanel; // This is the outer-most JPanel
    private JTextField hostJTextField;

    public SettingsEditorImpl() {
        super();
    }

    @NotNull
    @Override
    protected JComponent createEditor() {
        return configurationPanel;
    }

    /* Gets the form field's value */
    private String getHost() {
        return hostJTextField.getText();
    }

    /* Copy the value FROM your custom runConfiguration back INTO the form UI; this loads previously saved values into the form when it's opened. */
    @Override
    protected void resetEditorFrom(RunConfigurationImpl runConfiguration) {
        hostJTextField.setText(StringUtils.defaultIfBlank(runConfiguration.getHost(), RUN_CONFIGURATION_HOST_DEFAULT));
    }

    /* Sync the value from the form UI INTO the RunConfiguration, which is what the rest of your code will interact with. This requires a way to set this value on your custom RunConfiguration, i.e. RunConfigurationImpl#setHost(host) */
    @Override
    protected void applyEditorTo(RunConfigurationImpl runConfiguration) throws ConfigurationException {
        runConfiguration.setHost(getHost());
    }
}
So now the custom SettingsEditor, which backs the form UI, is set up to sync field values in and out of itself. Remember, the custom RunConfiguration is what actually represents this configuration; the SettingsEditor implementation just represents the FORM (a subtle but important difference).
Now we need a custom RunConfiguration ...
/* Annotate the class with @State and @Storage, which define how this RunConfiguration's data will be persisted/loaded. */
@State(
        name = Constants.PLUGIN_NAME,
        storages = {@Storage(Constants.PLUGIN_NAME + "__run-configuration.xml")}
)
public class RunConfigurationImpl extends RunConfigurationBase {

    // It's good to 'namespace' keys to your component
    public static final String KEY_HOST = Constants.PLUGIN_NAME + ".host";

    private String host;

    public RunConfigurationImpl(Project project, ConfigurationFactory factory, String name) {
        super(project, factory, name);
    }

    /* Return an instance of the custom SettingsEditor ... see the class defined above */
    @NotNull
    @Override
    public SettingsEditor<? extends RunConfiguration> getConfigurationEditor() {
        return new SettingsEditorImpl();
    }

    /* Return null, else we get a Startup/Connection tab in our Run Configuration UI in IntelliJ */
    @Nullable
    @Override
    public SettingsEditor<ConfigurationPerRunnerSettings> getRunnerSettingsEditor(ProgramRunner runner) {
        return null;
    }

    /* This is a pretty cool method. Every time SettingsEditor#applyEditorTo() changes the values in this class, this method runs and can check/validate any fields. If a RuntimeConfigurationException is thrown, its message is shown at the bottom of the Run Configuration UI in IntelliJ! */
    @Override
    public void checkConfiguration() throws RuntimeConfigurationException {
        if (!StringUtils.startsWithAny(getHost(), "http://", "https://")) {
            throw new RuntimeConfigurationException("Invalid host");
        }
    }

    @Nullable
    @Override
    public RunProfileState getState(@NotNull Executor executor, @NotNull ExecutionEnvironment executionEnvironment) throws ExecutionException {
        return null;
    }

    /* This READS any previously persisted configuration from the State/Storage defined by this class's annotations ... see above.
       You must manually read and populate the fields using JDOMExternalizerUtil.readField(..).
       This method is invoked at the "right time" by the plugin framework; you don't need to call it yourself. */
    @Override
    public void readExternal(Element element) throws InvalidDataException {
        super.readExternal(element);
        host = JDOMExternalizerUtil.readField(element, KEY_HOST);
    }

    /* This WRITES/persists the configuration TO the State/Storage defined by this class's annotations ... see above.
       You must manually write the fields using JDOMExternalizerUtil.writeField(..).
       This method is invoked at the "right time" by the plugin framework; you don't need to call it yourself. */
    @Override
    public void writeExternal(Element element) throws WriteExternalException {
        super.writeExternal(element);
        JDOMExternalizerUtil.writeField(element, KEY_HOST, host);
    }

    /* This is what the rest of the plugin code uses to access the configured 'host' value. The host field is written
       1. when readExternal(..) loads a value from a persisted config, and
       2. when SettingsEditor#applyEditorTo(..) is called as the form itself changes. */
    public String getHost() {
        return host;
    }

    /* This sets the value, and is primarily used by the custom SettingsEditor's applyEditorTo(..) method */
    public void setHost(String host) {
        this.host = host;
    }
}
To read these configuration values elsewhere, say for example in a custom ProgramRunner, you would do something like:
final RunConfigurationImpl runConfiguration = (RunConfigurationImpl) executionEnvironment.getRunnerAndConfigurationSettings().getConfiguration();
runConfiguration.getHost(); // Returns the configured host value
See com.intellij.execution.configurations.RunConfigurationBase#readExternal as well as com.intellij.execution.configurations.RunConfigurationBase#loadState and com.intellij.execution.configurations.RunConfigurationBase#writeExternal

Inform Selenium that a feature file starts / ends

I have to perform some operations when a feature file starts or ends.
But I didn't find any way for Selenium to know this.
For now I use one specific hook tag to catch the beginning and another to catch the end. But is there a way to detect it in Selenium code?
You can use Before and After hooks to perform some actions; you can add an extra tag to your Cucumber scenario and check it with scenario.getSourceTagNames(). See the example below:
@Before
public void setUpScenario(Scenario scenario) {
    List<String> tags = scenario.getSourceTagNames();
    if (tags.contains(scenario_specific_tag)) {
        System.out.println("Before your scenario running ....");
    }
}

@After
public void endUpScenario(Scenario scenario) {
    List<String> tags = scenario.getSourceTagNames();
    if (tags.contains(scenario_specific_tag)) {
        System.out.println("After your scenario ....");
    }
}