How to read XML file key values using the Netflix Archaius API? - netflix-archaius

I am new to Netflix Archaius. Can anyone please provide sample code to read a key value from an XML file?
I would also like the values to update automatically when I change any key in the XML file.
Regards,
Ashish

Here is a sample JUnit test that uses an XML file for key/value pairs and Archaius to automatically load the changes:
Important note on the sample code: the program exits after the first callback. If you want to test more callbacks, increase the count of the class variable 'latch'.
package com.test.config;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CountDownLatch;

import org.apache.commons.configuration.XMLPropertiesConfiguration;
import org.junit.Test;

import com.netflix.config.AbstractPollingScheduler;
import com.netflix.config.ConcurrentMapConfiguration;
import com.netflix.config.ConfigurationManager;
import com.netflix.config.DynamicConfiguration;
import com.netflix.config.DynamicPropertyFactory;
import com.netflix.config.DynamicStringProperty;
import com.netflix.config.FixedDelayPollingScheduler;
import com.netflix.config.PollResult;
import com.netflix.config.PolledConfigurationSource;

public class TestArchaius {

    CountDownLatch latch = new CountDownLatch(1);

    @Test
    public void test() throws Exception {
        AbstractPollingScheduler scheduler = new FixedDelayPollingScheduler(0, 1000, false);
        DynamicConfiguration dynamicConfiguration = new DynamicConfiguration(new MyPolledConfigurationSource(), scheduler);
        ConfigurationManager.install(dynamicConfiguration);

        DynamicStringProperty fieldsProperty = DynamicPropertyFactory.getInstance().getStringProperty("key1", "");
        fieldsProperty.addCallback(() -> {
            System.out.println(fieldsProperty.get());
            latch.countDown();
        });
        latch.await();
    }

    class MyPolledConfigurationSource implements PolledConfigurationSource {
        @Override
        public PollResult poll(boolean initial, Object checkPoint) throws Exception {
            ConcurrentMapConfiguration configFromPropertiesFile = new ConcurrentMapConfiguration(
                    new XMLPropertiesConfiguration("test.xml"));
            Map<String, Object> fullProperties = new HashMap<String, Object>();
            configFromPropertiesFile.getProperties().forEach((k, v) -> fullProperties.put((String) k, v));
            return PollResult.createFull(fullProperties);
        }
    }
}
test.xml:

<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>Description of the property list</comment>
    <entry key="key1">value1</entry>
    <entry key="key2">value2</entry>
    <entry key="key3">value3</entry>
</properties>
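Once ConfigurationManager.install(dynamicConfiguration) has run as in the test above, any other key from test.xml can also be read on demand through the same factory. A minimal sketch, reusing the keys from the sample file ("fallback" is just an illustrative default value):

// Reads the current value of key2; after a poll picks up a change to test.xml,
// get() returns the updated value without restarting the application.
DynamicStringProperty key2 = DynamicPropertyFactory.getInstance().getStringProperty("key2", "fallback");
System.out.println(key2.get()); // prints "value2" with the sample test.xml above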

Related

Error in BigQuery Snippets

I'm new to Dataflow and trying to get the schema of a BigQuery table dynamically.
I also need to get the name of the destination table dynamically, for which I'm using the DynamicDestinations class in BigQueryIO.write().to(). It works if the schema is provided for the destination table before executing the pipeline, but to get the schema dynamically I'm using the BigQuery Snippets, which take a datasetId and tableId as input and return the schema for a given table. When I try to run the pipeline with the Snippets, it gives the errors mentioned below.
Any help is appreciated.
Thanks in advance.
Exception in thread "main" java.lang.NoSuchMethodError: com.google.api.client.googleapis.services.json.AbstractGoogleJsonClient$Builder.setBatchPath(Ljava/lang/String;)Lcom/google/api/client/googleapis/services/AbstractGoogleClient$Builder;
at com.google.api.services.bigquery.Bigquery$Builder.setBatchPath(Bigquery.java:3519)
at com.google.api.services.bigquery.Bigquery$Builder.<init>(Bigquery.java:3498)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryClient(BigQueryServicesImpl.java:881)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$200(BigQueryServicesImpl.java:79)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:388)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:345)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:105)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:676)
at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:640)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:656)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:660)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$600(TransformHierarchy.java:311)
at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:245)
at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:458)
at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:575)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:310)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:297)
at project2.configTable.main(configTable.java:146)
Code:
package project2;

import java.io.File;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.avro.Schema;
import org.apache.beam.runners.dataflow.DataflowRunner;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.CreateDisposition;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.WriteDisposition;
import org.apache.beam.sdk.io.gcp.bigquery.DynamicDestinations;
import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.options.ValueProvider.NestedValueProvider;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.DoFn.ProcessContext;
import org.apache.beam.sdk.transforms.DoFn.ProcessElement;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.SerializableFunction;
import org.apache.beam.sdk.transforms.View;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionView;
import org.apache.beam.sdk.values.ValueInSingleWindow;

import com.google.api.services.bigquery.model.Table;
import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.DatasetInfo;
import com.google.cloud.bigquery.Field;
import com.google.cloud.bigquery.FieldList;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.InsertAllRequest;
import com.google.cloud.bigquery.InsertAllResponse;
import com.google.cloud.bigquery.LegacySQLTypeName;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.StandardTableDefinition;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.TableInfo;

import avro.shaded.com.google.common.collect.ImmutableList;

public class configTable {

    public static void main(String[] args) {
        customInt op = PipelineOptionsFactory.as(customInt.class);
        op.setProject("my-new-project");
        op.setTempLocation("gs://train-10/projects");
        op.setWorkerMachineType("n1-standard-1");
        op.setTemplateLocation("gs://train-10/main-template-with-snippets");
        op.setRunner(DataflowRunner.class);

        org.apache.beam.sdk.Pipeline p = org.apache.beam.sdk.Pipeline.create(op);

        PCollection<TableRow> indata = p.apply("Taking side input",
                BigQueryIO.readTableRows().from("my-new-project:training.config"));

        PCollectionView<String> view = indata.apply("Convert to view", ParDo.of(new DoFn<TableRow, String>() {
            @ProcessElement
            public void processElement(ProcessContext c) {
                TableRow row = c.element();
                c.output(row.get("file").toString());
            }
        })).apply(View.asSingleton());

        PCollection<TableRow> mainop = p.apply("Taking input",
                TextIO.read().from(NestedValueProvider.of(op.getInputFile(), new SerializableFunction<String, String>() {
                    public String apply(String input) {
                        return "gs://train-10/projects/" + input;
                    }
                }))).apply("Transform", ParDo.of(new DoFn<String, TableRow>() {
                    @ProcessElement
                    public void processElement(ProcessContext c) {
                        c.output(new TableRow().set("data", c.element()));
                    }
                }));

        mainop.apply("Write data", BigQueryIO.writeTableRows().to(new DynamicDestinations<TableRow, String>() {
            @Override
            public String getDestination(ValueInSingleWindow<TableRow> element) {
                String d = sideInput(view);
                String tablespec = "my-new-project:training." + d;
                return tablespec;
            }

            @Override
            public List<PCollectionView<?>> getSideInputs() {
                return ImmutableList.of(view);
            }

            @Override
            public TableDestination getTable(String destination) {
                //String dest = String.format("%s:%s.%s", "my-new-project", "training", destination);
                String dest = destination;
                return new TableDestination(dest, dest);
            }

            @Override
            public TableSchema getSchema(String destination) {
                BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
                com.google.cloud.bigquery.Table table = bigquery.getTable("training", destination);
                com.google.cloud.bigquery.Schema tbschema = table.getDefinition().getSchema();
                FieldList tfld = tbschema.getFields();
                List<TableFieldSchema> flds = new ArrayList<>();
                for (Field each : tfld) {
                    flds.add(new TableFieldSchema().setName(each.getName()).setType(each.getType().toString()));
                }
                return new TableSchema().setFields(flds);
            }
        }).withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED).withWriteDisposition(WriteDisposition.WRITE_TRUNCATE));

        p.run();
    }
}
I don't think you can both use WRITE_TRUNCATE

.withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED).withWriteDisposition(WriteDisposition.WRITE_TRUNCATE)

and read the table's definition

com.google.cloud.bigquery.Table table = bigquery.getTable("training", destination);
com.google.cloud.bigquery.Schema tbschema = table.getDefinition().getSchema();

because even if the table exists, it may be recreated when paired with BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE, and at that point the getTable call will fail. In other words, WRITE_TRUNCATE is not an atomic operation.
I suggest that you either create the table (with the right schema) beforehand (CREATE_NEVER), append to the table if it exists (WRITE_EMPTY or WRITE_APPEND), or store the schema outside of the Dataflow pipeline and read it in, as sketched below.
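To illustrate the last option, here is a minimal, hypothetical sketch (reusing the imports from the code above, plus java.util.LinkedHashMap) of resolving the schema once before the pipeline is constructed and carrying it into getSchema() as a plain serializable map. "myDestinationTable" is a placeholder for however you determine the table up front:

// Resolve the schema once, outside the pipeline, with the google-cloud-bigquery client.
BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
com.google.cloud.bigquery.Schema schema =
        bigquery.getTable("training", "myDestinationTable").getDefinition().getSchema();

// Keep it in a simple serializable structure that the DynamicDestinations instance can capture.
final Map<String, String> fieldTypes = new LinkedHashMap<>();
for (Field f : schema.getFields()) {
    fieldTypes.put(f.getName(), f.getType().toString());
}

// Then, inside getSchema(String destination), rebuild the TableSchema from the captured map
// instead of calling bigquery.getTable(...) while the write is running:
List<TableFieldSchema> flds = new ArrayList<>();
for (Map.Entry<String, String> e : fieldTypes.entrySet()) {
    flds.add(new TableFieldSchema().setName(e.getKey()).setType(e.getValue()));
}
return new TableSchema().setFields(flds);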

Accessing TableRow columns in BigQuery Apache Beam

I am trying to:
1. Read JSON events from Cloud Pub/Sub.
2. Load the events from Cloud Pub/Sub into BigQuery every 15 minutes using file loads to save cost over streaming inserts.
3. Route events to a destination based on the "user_id" and "campaign_id" fields in the JSON event: "user_id" will be the dataset name and "campaign_id" will be the table name. The partition name comes from the event timestamp.
4. Keep the same schema for all tables.
I am new to Java and Beam. I think my code mostly does what I am trying to do and I just need a little help here.
But I am unable to access the "campaign_id" and "user_id" fields in the JSON message, so my events are not routing to the correct table.
package ...;

import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.coders.Coder;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.DynamicDestinations;
import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;
import org.apache.beam.sdk.io.gcp.bigquery.TableRowJsonCoder;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.ValueInSingleWindow;
import org.joda.time.Duration;
import org.joda.time.Instant;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.text.SimpleDateFormat;

import static org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED;
import static org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.Method.FILE_LOADS;
import static org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.WriteDisposition.WRITE_APPEND;

public class ClickLogConsumer {

    private static final int BATCH_INTERVAL_SECS = 15 * 60;
    private static final String PROJECT = "pure-app";

    public static PTransform<PCollection<String>, PCollection<TableRow>> jsonToTableRow() {
        return new JsonToTableRow();
    }

    private static class JsonToTableRow
            extends PTransform<PCollection<String>, PCollection<TableRow>> {
        @Override
        public PCollection<TableRow> expand(PCollection<String> stringPCollection) {
            return stringPCollection.apply("JsonToTableRow", MapElements.<String, TableRow>via(
                    new SimpleFunction<String, TableRow>() {
                        @Override
                        public TableRow apply(String json) {
                            try {
                                InputStream inputStream = new ByteArrayInputStream(
                                        json.getBytes(StandardCharsets.UTF_8.name()));
                                // OUTER is used here to prevent an EOF exception
                                return TableRowJsonCoder.of().decode(inputStream, Coder.Context.OUTER);
                            } catch (IOException e) {
                                throw new RuntimeException("Unable to parse input", e);
                            }
                        }
                    }));
        }
    }

    public static void main(String[] args) throws Exception {
        Pipeline pipeline = Pipeline.create(options);
        pipeline
                .apply(PubsubIO.readStrings().withTimestampAttribute("timestamp").fromTopic("projects/pureapp-199410/topics/clicks"))
                .apply(jsonToTableRow())
                .apply("WriteToBQ",
                        BigQueryIO.writeTableRows()
                                .withMethod(FILE_LOADS)
                                .withWriteDisposition(WRITE_APPEND)
                                .withCreateDisposition(CREATE_IF_NEEDED)
                                .withTriggeringFrequency(Duration.standardSeconds(BATCH_INTERVAL_SECS))
                                .withoutValidation()
                                .to(new DynamicDestinations<TableRow, String>() {
                                    @Override
                                    public String getDestination(ValueInSingleWindow<TableRow> element) {
                                        String tableName = "campaign_id"; // JSON message in Pub/Sub has a "campaign_id" field, how do I access it here?
                                        String datasetName = "user_id"; // JSON message in Pub/Sub has a "user_id" field, how do I access it here?
                                        Instant eventTimestamp = element.getTimestamp();
                                        String partition = new SimpleDateFormat("yyyyMMdd").format(eventTimestamp.toDate());
                                        return String.format("%s:%s.%s$%s", PROJECT, datasetName, tableName, partition);
                                    }

                                    @Override
                                    public TableDestination getTable(String table) {
                                        return new TableDestination(table, null);
                                    }

                                    @Override
                                    public TableSchema getSchema(String destination) {
                                        return getTableSchema();
                                    }
                                }));
        pipeline.run();
    }
}
I arrived at the above code based on reading:
1. https://medium.com/myheritage-engineering/kafka-to-bigquery-load-a-guide-for-streaming-billions-of-daily-events-cbbf31f4b737
2. https://shinesolutions.com/2017/12/05/fun-with-serializable-functions-and-dynamic-destinations-in-cloud-dataflow/
3. https://beam.apache.org/documentation/sdks/javadoc/2.0.0/org/apache/beam/sdk/io/gcp/bigquery/DynamicDestinations.html
4. BigQueryIO - Write performance with streaming and FILE_LOADS
5. Inserting into BigQuery via load jobs (not streaming)
Update
import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import com.google.api.services.bigquery.model.TimePartitioning;
import com.google.common.collect.ImmutableList;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.coders.Coder;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;
import org.apache.beam.sdk.io.gcp.bigquery.TableRowJsonCoder;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

import static org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED;
import static org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.Method.FILE_LOADS;
import static org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.WriteDisposition.WRITE_APPEND;

public class ClickLogConsumer {

    private static final int BATCH_INTERVAL_SECS = 15 * 60;
    private static final String PROJECT = "pure-app";

    public static PTransform<PCollection<String>, PCollection<TableRow>> jsonToTableRow() {
        return new JsonToTableRow();
    }

    private static class JsonToTableRow
            extends PTransform<PCollection<String>, PCollection<TableRow>> {
        @Override
        public PCollection<TableRow> expand(PCollection<String> stringPCollection) {
            return stringPCollection.apply("JsonToTableRow", MapElements.<String, TableRow>via(
                    new SimpleFunction<String, TableRow>() {
                        @Override
                        public TableRow apply(String json) {
                            try {
                                InputStream inputStream = new ByteArrayInputStream(
                                        json.getBytes(StandardCharsets.UTF_8.name()));
                                // OUTER is used here to prevent an EOF exception
                                return TableRowJsonCoder.of().decode(inputStream, Coder.Context.OUTER);
                            } catch (IOException e) {
                                throw new RuntimeException("Unable to parse input", e);
                            }
                        }
                    }));
        }
    }

    public static void main(String[] args) throws Exception {
        Pipeline pipeline = Pipeline.create(options);
        pipeline
                .apply(PubsubIO.readStrings().withTimestampAttribute("timestamp").fromTopic("projects/pureapp-199410/topics/clicks"))
                .apply(jsonToTableRow())
                .apply(BigQueryIO.write()
                        .withTriggeringFrequency(Duration.standardSeconds(BATCH_INTERVAL_SECS))
                        .withMethod(FILE_LOADS)
                        .withWriteDisposition(WRITE_APPEND)
                        .withCreateDisposition(CREATE_IF_NEEDED)
                        .withSchema(new TableSchema().setFields(
                                ImmutableList.of(
                                        new TableFieldSchema().setName("timestamp").setType("TIMESTAMP"),
                                        new TableFieldSchema().setName("exchange").setType("STRING"))))
                        .to((row) -> {
                            String datasetName = row.getValue().get("user_id").toString();
                            String tableName = row.getValue().get("campaign_id").toString();
                            return new TableDestination(String.format("%s:%s.%s", PROJECT, datasetName, tableName), "Some destination");
                        })
                        .withTimePartitioning(new TimePartitioning().setField("timestamp")));
        pipeline.run();
    }
}
How about: String tableName = element.getValue().get("campaign_id").toString(), and likewise for the dataset?
Besides, for inserting into time-partitioned tables, I strongly recommend using BigQuery's column-based partitioning instead of a partition decorator in the table name. Please see "Loading historical data into time-partitioned BigQuery tables" in the javadoc - you'll need a timestamp column. (Note that the javadoc has a typo: "time" vs "timestamp".)
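To make the first suggestion concrete, here is a minimal sketch of the original getDestination reading those fields from the decoded TableRow (this assumes every incoming event actually contains non-null "user_id" and "campaign_id" values):

@Override
public String getDestination(ValueInSingleWindow<TableRow> element) {
    TableRow row = element.getValue();                     // the decoded JSON event
    String datasetName = row.get("user_id").toString();    // assumed present in every event
    String tableName = row.get("campaign_id").toString();  // assumed present in every event
    return String.format("%s:%s.%s", PROJECT, datasetName, tableName);
}

Combined with .withTimePartitioning(new TimePartitioning().setField("timestamp")) as in the updated code, the partition decorator in the table name is no longer needed.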

HSQLDB in-memory mode doesn't delete files on shutdown

I'm using HSQLDB version 2.2.9 for testing purposes.
When I create a named in-memory database, the files aren't deleted after calling the shutdown function. On my filesystem I have a folder DBname.tmp and files DBname.lck, DBname.log, DBname.properties and DBname.script. As I understand the documentation (http://hsqldb.org/doc/2.0/guide/dbproperties-chapt.html#dpc_connection_url), this shouldn't happen.
For testing I'm using the following code:
import java.io.IOException;

import org.hsqldb.Server;
import org.hsqldb.persist.HsqlProperties;
import org.hsqldb.server.ServerAcl.AclFormatException;
import org.junit.Test;

public class HSQLDBInMemTest {

    @Test
    public void test() throws IOException, AclFormatException {
        HsqlProperties props = new HsqlProperties();
        props.setProperty("server.database.0", "test1");
        props.setProperty("server.dbname.0", "test1");
        props.setProperty("server.database.1", "test2");
        props.setProperty("server.dbname.1", "test2");

        Server hsqlServer = new Server();
        hsqlServer.setRestartOnShutdown(false);
        hsqlServer.setNoSystemExit(true);
        hsqlServer.setProperties(props);

        hsqlServer.start();
        hsqlServer.shutdown();
    }
}
Answered here: http://sourceforge.net/mailarchive/message.php?msg_id=30881908 by fredt
The code should look like:
import java.io.IOException;

import org.hsqldb.Server;
import org.hsqldb.persist.HsqlProperties;
import org.hsqldb.server.ServerAcl.AclFormatException;
import org.junit.Test;

public class HSQLDBInMemTest {

    @Test
    public void test() throws IOException, AclFormatException {
        HsqlProperties props = new HsqlProperties();
        props.setProperty("server.database.0", "mem:test1");
        props.setProperty("server.database.1", "mem:test2");

        Server hsqlServer = new Server();
        hsqlServer.setRestartOnShutdown(false);
        hsqlServer.setNoSystemExit(true);
        hsqlServer.setProperties(props);

        hsqlServer.start();
        hsqlServer.shutdown();
    }
}
The path for a memory database looks like props.setProperty("server.database.0", "mem:test1");
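For reference, a client can then reach such an in-memory database through the running server over plain JDBC (Connection and DriverManager come from java.sql). This is a minimal sketch, assuming server.dbname.0 is set to "test1" as in the question's first snippet, HSQLDB's default port 9001, and the default SA user with an empty password:

// Connects to the "test1" in-memory database served by the Server started above.
Connection conn = DriverManager.getConnection("jdbc:hsqldb:hsql://localhost/test1", "SA", "");
conn.createStatement().execute("CREATE TABLE demo (id INTEGER)");
conn.close();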

JUnit reporter does not show detailed report for each step in JBehave

I'm trying to set up JBehave for testing web services.
The template story runs well, but in the JUnit panel I can only see the execution result of the acceptance suite class. What I want is to see the execution result for each story in the suite and for each step in a story, like it is shown for plain JUnit tests or in the Thucydides framework.
Here is my acceptance suite class. Maybe I haven't configured something, or I have to annotate my step methods some other way, but I haven't found an answer yet.
package ***.qa_webservices_testing.jbehave;

import java.util.Arrays;
import java.util.List;
import java.util.Properties;

import org.jbehave.core.Embeddable;
import org.jbehave.core.configuration.Configuration;
import org.jbehave.core.configuration.MostUsefulConfiguration;
import org.jbehave.core.io.CodeLocations;
import org.jbehave.core.io.LoadFromClasspath;
import org.jbehave.core.io.StoryFinder;
import org.jbehave.core.junit.JUnitStories;
import org.jbehave.core.parsers.RegexPrefixCapturingPatternParser;
import org.jbehave.core.reporters.CrossReference;
import org.jbehave.core.reporters.Format;
import org.jbehave.core.reporters.StoryReporterBuilder;
import org.jbehave.core.steps.InjectableStepsFactory;
import org.jbehave.core.steps.InstanceStepsFactory;
import org.jbehave.core.steps.ParameterConverters;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import ***.qa_webservices_testing.jbehave.steps.actions.TestAction;

/**
 * suite class.
 */
public class AcceptanceTestSuite extends JUnitStories {

    private static final String CTC_STORIES_PATTERN = "ctc.stories";
    private static final String STORY_BASE = "src/test/resources";
    private static final String DEFAULT_STORY_NAME = "stories/**/*.story";
    private static final Logger LOGGER = LoggerFactory.getLogger(AcceptanceTestSuite.class);

    private final CrossReference xref = new CrossReference();

    public AcceptanceTestSuite() {
        configuredEmbedder()
                .embedderControls()
                .doGenerateViewAfterStories(true)
                .doIgnoreFailureInStories(false)
                .doIgnoreFailureInView(true)
                .doVerboseFailures(true)
                .useThreads(2)
                .useStoryTimeoutInSecs(60);
    }

    @Override
    public Configuration configuration() {
        Class<? extends Embeddable> embeddableClass = this.getClass();
        Properties viewResources = new Properties();
        viewResources.put("decorateNonHtml", "true");
        viewResources.put("reports", "ftl/jbehave-reports-with-totals.ftl");
        // Start from default ParameterConverters instance
        ParameterConverters parameterConverters = new ParameterConverters();
        return new MostUsefulConfiguration()
                .useStoryLoader(new LoadFromClasspath(embeddableClass))
                .useStoryReporterBuilder(new StoryReporterBuilder()
                        .withCodeLocation(CodeLocations.codeLocationFromClass(embeddableClass))
                        .withDefaultFormats()
                        .withViewResources(viewResources)
                        .withFormats(Format.CONSOLE, Format.TXT, Format.HTML_TEMPLATE, Format.XML_TEMPLATE)
                        .withFailureTrace(true)
                        .withFailureTraceCompression(false)
                        .withMultiThreading(false)
                        .withCrossReference(xref))
                .useParameterConverters(parameterConverters)
                // use '%' instead of '$' to identify parameters
                .useStepPatternParser(new RegexPrefixCapturingPatternParser("%"))
                .useStepMonitor(xref.getStepMonitor());
    }

    @Override
    protected List<String> storyPaths() {
        String storiesPattern = System.getProperty(CTC_STORIES_PATTERN);
        if (storiesPattern == null) {
            storiesPattern = DEFAULT_STORY_NAME;
        } else {
            storiesPattern = "**/" + storiesPattern;
        }
        LOGGER.info("will search stories by pattern {}", storiesPattern);
        List<String> result = new StoryFinder().findPaths(STORY_BASE, Arrays.asList(storiesPattern), Arrays.asList(""));
        for (String item : result) {
            LOGGER.info("story to be used: {}", item);
        }
        return result;
    }

    @Override
    public InjectableStepsFactory stepsFactory() {
        return new InstanceStepsFactory(configuration(), new TestAction());
    }
}
my test methods look like:
Customer customer = new Customer();

@Given("I have Access to Server")
public void givenIHaveAccesToServer() {
    customer.haveAccesToServer();
}
So they are annotated only with JBehave annotations.
The result shown in the JUnit panel looks like this (I don't yet have enough reputation to post images):
You should try this open source library:
https://github.com/codecentric/jbehave-junit-runner
It does exactly what you ask for :)
Yes, the codecentric runner works very nicely.
https://github.com/codecentric/jbehave-junit-runner
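For illustration, wiring it in is usually just a matter of running the suite class with the runner that library provides. A minimal sketch, assuming the jbehave-junit-runner artifact is on the test classpath (check the runner's class name against the README of the version you use):

import org.junit.runner.RunWith;
import de.codecentric.jbehave.junit.monitoring.JUnitReportingRunner;

@RunWith(JUnitReportingRunner.class)
public class AcceptanceTestSuite extends JUnitStories {
    // configuration(), storyPaths() and stepsFactory() stay exactly as shown above
}

With the runner in place, each story and each step shows up as its own node in the JUnit panel instead of a single result for the whole suite.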

Play 2 framework testing simulate session and POST?

When running a web test like this
@Test
public void runInBrowser() {
    running(testServer(3333), HtmlUnitDriver.class, new Callback<TestBrowser>() {
        public void invoke(TestBrowser browser) {
            browser.goTo("http://localhost:3333");
            assertThat(browser.$("#title").getTexts().get(0)).isEqualTo("Hello Guest");
            browser.$("a").click();
            assertThat(browser.url()).isEqualTo("http://localhost:3333/Coco");
            assertThat(browser.$("#title", 0).getText()).isEqualTo("Hello Coco");
        }
    });
}
How can one pass session values while using this kind of testing, and how can one simulate a POST? Thanks :-)
These are Selenium tests with FluentLenium. Since you test with a browser you must use an HTML form with method POST to make a POST request.
browser.goTo("http://localhost:3333" + routes.Login.login().url());//example for reverse route, alternatively use something like "http://localhost:3333/login"
browser.fill("#password").with("secret");
browser.fill("#username").with("aUsername");
browser.submit("#signin");//trigger submit button on the form
//after finished request: http://www.playframework.org/documentation/api/2.0.4/java/play/test/TestBrowser.html
browser.getCookies(); //read only cookies
If you don't want to test with a browser but with plain HTTP instead, you can use a FakeRequest:
import static controllers.routes.ref.Application;
import static org.fest.assertions.Assertions.assertThat;
import static play.mvc.Http.Status.OK;
import static play.mvc.Http.Status.UNAUTHORIZED;
import static play.test.Helpers.*;

import play.libs.WS;

import java.util.HashMap;
import java.util.Map;

import org.junit.BeforeClass;
import org.junit.Test;

import play.mvc.Result;
import play.test.FakeRequest;

public class SoTest {

    @Test
    public void testInServer() {
        running(testServer(3333), new Runnable() {
            public void run() {
                Fixtures.loadAll(); // you may have to fill your database; you have to program this yourself
                Map<String, String> parameters = new HashMap<String, String>();
                parameters.put("userName", "aUsername");
                parameters.put("password", "secret");
                FakeRequest fakeRequest = new FakeRequest()
                        .withSession("key", "value")
                        .withCookies(name, value, maxAge, path, domain, secure, httpOnly)
                        .withFormUrlEncodedBody(parameters);
                Result result = callAction(Application.signIn(), fakeRequest);
                int responseCode = status(result);
                assertThat(responseCode).isEqualTo(OK);
            }
        });
    }
}
Also check out this answer: How to manipulate Session, Request and Response for test in play2.0