Unpredictable behavior around `during` and `atMost` of Awaitility

While testing Kafka I ended up with some issues around Awaitility. The goal was to verify that a Kafka topic doesn't receive any new records for a specified time. This is a simplified sketch of my test, but it shows the problem. I expect that no ConditionTimeoutException is thrown in this case, but it is.
public static void main(String[] args) {
List<String> list = new ArrayList<>();
await("wait").during(5_000, TimeUnit.MILLISECONDS).atMost(5_000, TimeUnit.MILLISECONDS)
.pollInterval(100, TimeUnit.MILLISECONDS)
.until(() -> list, List::isEmpty);
}
I increased the atMost timeout to 5000 + pollInterval = 5100 ms but still end up with the exception. On my local machine the exception stopped being thrown only once the atMost timeout reached roughly 5170-5180 ms.
Is there something I should keep in mind, or is the test simply incorrect?

In your case you want to ensure that the list is still empty after 5 seconds. Just remove the "atMost" part. This code should work:
public static void main(String[] args) {
List<String> list = new ArrayList<>();
await("wait").during(5_000, TimeUnit.MILLISECONDS)
.pollInterval(100, TimeUnit.MILLISECONDS)
.until(() -> list, List::isEmpty);
}
If you want to use "during" together with "atMost", ensure that the atMost time is greater than the during time; the condition is only evaluated at poll boundaries, so leave some headroom on top of the during window:
public static void main(String[] args) {
List<String> list = new ArrayList<>();
await("wait").during(5_000, TimeUnit.MILLISECONDS).atMost(5_500, TimeUnit.MILLISECONDS)
.pollInterval(100, TimeUnit.MILLISECONDS)
.until(() -> list, List::isEmpty);
}

Related

Can Ignite Streamer.addData be executed on a separate node from the StreamReceiver/Visitor?

Is it possible to do stream injection from a Client Node and intercept the same stream on the Server Node to process the stream before inserting it into the cache?
The reason for doing this is that the Client Node receives the stream from an external source and the same needs to be injected into a partitioned cache based on AffinityKey across multiple server nodes. The stream needs to be intercepted on each node and processed with the lowest latency.
I could've used cache events to do this but StreamVisitor is supposed to be faster.
Following is the sample that I am trying to execute. Start 2 nodes: one containing the streamer, the other containing the StreamReceiver:
public class StreamerNode {
public static void main(String[] args) {
......
Ignition.setClientMode(false);
Ignite ignite = Ignition.start(igniteConfiguration);
CacheConfiguration<SeqKey, String> myCfg = new CacheConfiguration<SeqKey, String>("myCache");
......
IgniteCache<SeqKey, String> myCache = ignite.getOrCreateCache(myCfg);
IgniteDataStreamer<SeqKey, String> myStreamer = ignite.dataStreamer(myCache.getName()); // Create Ignite Streamer for windowing data
for (int i = 51; i <= 100; i++) {
String paddedString = org.apache.commons.lang.StringUtils.leftPad(i+"", 7, "0") ;
String word = "TEST_" + paddedString;
SeqKey seqKey = new SeqKey("TEST", counter++ );
myStreamer.addData(seqKey, word) ;
}
}
}
public class VisitorNode {
public static void main(String[] args) {
......
Ignition.setClientMode(false);
Ignite ignite = Ignition.start(igniteConfiguration);
CacheConfiguration<SeqKey, String> myCfg = new CacheConfiguration<SeqKey, String>("myCache");
......
IgniteCache<SeqKey, String> myCache = ignite.getOrCreateCache(myCfg);
IgniteDataStreamer<SeqKey, String> myStreamer = ignite.dataStreamer(myCache.getName()); // Create Ignite Streamer for windowing data
myStreamer.receiver(new StreamVisitor<SeqKey, String>() {
int i=1 ;
@Override
public void apply(IgniteCache<SeqKey, String> cache, Map.Entry<SeqKey, String> e) {
String tradeGetData = e.getValue();
System.out.println(nodeID+" : visitorNode ..count="+ i++ + " received key="+e.getKey() + " : val="+ e.getValue());
//do some processing here before inserting in the cache ..
cache.put(e.getKey(), tradeGetData);
}
});
}
}
Of course it can be executed on a different node. Usually addData() is executed on a client node, and the StreamReceiver works on the server nodes. You don't have to do anything special to make it happen.
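A minimal sketch of the streamer side running as a client node (trimmed down from the StreamerNode class in the question; the server nodes keep clientMode=false and register the StreamVisitor exactly as in VisitorNode):
Ignition.setClientMode(true);                         // the injecting side joins the cluster as a client
Ignite ignite = Ignition.start(igniteConfiguration);
ignite.getOrCreateCache("myCache");                   // make sure the cache exists, as in the question
try (IgniteDataStreamer<SeqKey, String> streamer = ignite.dataStreamer("myCache")) {
    streamer.addData(new SeqKey("TEST", 1), "TEST_0000001");
} // close() flushes any remaining buffered entries to the server nodes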
As for the rest of your post, could you elaborate with more details and perhaps samples? I could not understand the desired setup.
You can use continuous queries if you don't need to modify data, only act on it.
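A hedged sketch of such a continuous query on the same cache (names taken from the question); the listener only observes new entries without modifying them:
// needs org.apache.ignite.cache.query.ContinuousQuery, org.apache.ignite.cache.query.QueryCursor and javax.cache.* imports
IgniteCache<SeqKey, String> myCache = ignite.getOrCreateCache("myCache");
ContinuousQuery<SeqKey, String> qry = new ContinuousQuery<>();
qry.setLocalListener(events -> {
    for (CacheEntryEvent<? extends SeqKey, ? extends String> e : events) {
        System.out.println("received key=" + e.getKey() + " : val=" + e.getValue());
    }
});
QueryCursor<Cache.Entry<SeqKey, String>> cursor = myCache.query(qry); // keep the cursor open to stay subscribed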

Selenium TestNG - Second iteration shows the same assertion failure when a softassert fails in first iteration (second should pass)

I have to verify an integer value on a page for different clients using data providers. I am using SoftAssert so that the execution doesn't stop. However, when one soft assertion fails (intentionally) in the first iteration, the subsequent iteration fails abruptly (it should pass) and throws the exact same assertion error as was thrown in the first iteration. But if the first iteration passes, the second continues properly. Where could the issue be?
@BeforeMethod
public void beforeMethod(Method method) {
System.setProperty("webdriver.chrome.driver","C:\\Users\\a0136300\\Downloads\\chromedriver_win32\\chromedriver.exe");
driver = new ChromeDriver();
driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
config = new Configreader();
driver.manage().deleteAllCookies();
}
@DataProvider(name = "TestMSSData")
public Object [][] getData(){
Object [][] data=new Object[2][6];
data[0][0]="url1";
data[0][1]="client1";
data[0][2]="Retail Eligibles : 2018";
data[0][3]="E1";
data[0][4]="Maria_Fake";
data[0][5]=7937;
data[1][0]="url2";
data[1][1]="client2";
data[1][2]="ACTIVE- FAC : 2018";
data[1][3]="E2";
data[1][4]="Tad_Fake";
data[1][5]=4761;
return data;
}
@Test(dataProvider = "TestMSSData")
public void Manager_Self_Service(String url, String client, String eliggrp, String empnumber, String firstname, Integer SSN) throws Exception {
driver.get(url);
try {
String headertex = driver.findElement(By.xpath("/html/body/div[2]/div[1]/div[1]/h3")).getText();
Assert.assertEquals(headertex, "Log On ");}
catch (NoSuchElementException e){
throw new AssertionError("Error in loading URL", e);
}
driver.manage().window().maximize();
RIMethods obj3 =new RIMethods(driver, config);
obj3.Login();
//Verify last 4 digits of SSN
String sn = driver.findElement(By.name("eeSsn3")).getAttribute("value");
int socialsecurity = Integer.parseInt(sn);
s_assert.assertEquals(socialsecurity, SSN, "SSN last four digits did not match");
//verify the header for Future benefits
if(driver.findElement(By.xpath("/html/body/div[4]/div[2]/div[1]/div[1]")).getText().contains("Future Benefits Summary"))
System.out.println("Future Benefits summary header is correct");
else
System.out.println("header is incorrect");
driver.manage().timeouts().implicitlyWait(3, TimeUnit.SECONDS);
s_assert.assertAll();
}
@AfterMethod
public void cleanUp(){
driver.quit();
}
Ahh, it's been a full day of struggling with this one and the solution is too simple.
To make it work, I created the SoftAssert object inside the @Test method instead of defining it at class level. I'm not sure exactly why that matters, but presumably a class-level SoftAssert keeps the failures recorded in the first iteration, so the next assertAll() reports them again. It's working fine now.
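Roughly, the change looks like this (everything elided is the same as in the question):
@Test(dataProvider = "TestMSSData")
public void Manager_Self_Service(String url, String client, String eliggrp,
        String empnumber, String firstname, Integer SSN) throws Exception {
    SoftAssert s_assert = new SoftAssert(); // fresh instance per data-provider iteration
    // ... navigation, login and lookups as in the original test ...
    s_assert.assertEquals(socialsecurity, SSN, "SSN last four digits did not match");
    s_assert.assertAll(); // only reports failures from this iteration
}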

How to update google-cloud-dataflow running in App Engine without clearing BigQuery tables

I have a google-cloud-dataflow process running on App Engine.
It listens to messages sent via Pub/Sub and streams them to BigQuery.
I updated my code and I am trying to rerun the app.
But I receive this error:
Exception in thread "main" java.lang.IllegalArgumentException: BigQuery table is not empty
Is there any way to update the Dataflow job without deleting the table?
My code might change quite often, and I do not want to delete the data in the table.
Here is my code:
public class MyPipline {
private static final Logger LOG = LoggerFactory.getLogger(MyPipline.class);
private static String name;
public static void main(String[] args) {
List<TableFieldSchema> fields = new ArrayList<>();
fields.add(new TableFieldSchema().setName("a").setType("string"));
fields.add(new TableFieldSchema().setName("b").setType("string"));
fields.add(new TableFieldSchema().setName("c").setType("string"));
TableSchema tableSchema = new TableSchema().setFields(fields);
DataflowPipelineOptions options = PipelineOptionsFactory.as(DataflowPipelineOptions.class);
options.setRunner(BlockingDataflowPipelineRunner.class);
options.setProject("my-data-analysis");
options.setStagingLocation("gs://my-bucket/dataflow-jars");
options.setStreaming(true);
Pipeline pipeline = Pipeline.create(options);
PCollection<String> input = pipeline
.apply(PubsubIO.Read.subscription(
"projects/my-data-analysis/subscriptions/myDataflowSub"));
input.apply(ParDo.of(new DoFn<String, Void>() {
@Override
public void processElement(DoFn<String, Void>.ProcessContext c) throws Exception {
LOG.info("json" + c.element());
}
}));
String fileName = UUID.randomUUID().toString().replaceAll("-", "");
input.apply(ParDo.of(new DoFn<String, String>() {
@Override
public void processElement(DoFn<String, String>.ProcessContext c) throws Exception {
JSONObject firstJSONObject = new JSONObject(c.element());
firstJSONObject.put("a", firstJSONObject.get("a").toString()+ "1000");
c.output(firstJSONObject.toString());
}
}).named("update json")).apply(ParDo.of(new DoFn<String, TableRow>() {
@Override
public void processElement(DoFn<String, TableRow>.ProcessContext c) throws Exception {
JSONObject json = new JSONObject(c.element());
TableRow row = new TableRow().set("a", json.get("a")).set("b", json.get("b")).set("c", json.get("c"));
c.output(row);
}
}).named("convert json to table row"))
.apply(BigQueryIO.Write.to("my-data-analysis:mydataset.mytable").withSchema(tableSchema)
);
pipeline.run();
}
}
You need to specify withWriteDisposition on your BigQueryIO.Write - see the documentation of the method and of its argument. Depending on your requirements, you need either WRITE_TRUNCATE or WRITE_APPEND.
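For the pipeline in the question, the changed part of the write would be roughly (a sketch, not tested; for a streaming pipeline WRITE_APPEND is the usual choice):
.apply(BigQueryIO.Write
        .to("my-data-analysis:mydataset.mytable")
        .withSchema(tableSchema)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));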

Hadoop RPC server doesn't stop

I was trying to create a simple parent/child process pair with IPC between them using Hadoop IPC. It turns out that the program executes and prints the results, but it doesn't exit. Here is the code for it.
interface Protocol extends VersionedProtocol{
public static final long versionID = 1L;
IntWritable getInput();
}
public final class JavaProcess implements Protocol{
Server server;
public JavaProcess() {
String rpcAddr = "localhost";
int rpcPort = 8989;
Configuration conf = new Configuration();
try {
server = RPC.getServer(this, rpcAddr, rpcPort, conf);
server.start();
} catch (IOException e) {
e.printStackTrace();
}
}
public int exec(Class klass) throws IOException,InterruptedException {
String javaHome = System.getProperty("java.home");
String javaBin = javaHome +
File.separator + "bin" +
File.separator + "java";
String classpath = System.getProperty("java.class.path");
String className = klass.getCanonicalName();
ProcessBuilder builder = new ProcessBuilder(
javaBin, "-cp", classpath, className);
Process process = builder.start();
int exit_code = process.waitFor();
server.stop();
System.out.println("completed process");
return exit_code;
}
public static void main(String...args) throws IOException, InterruptedException{
int status = new JavaProcess().exec(JavaProcessChild.class);
System.out.println(status);
}
@Override
public IntWritable getInput() {
return new IntWritable(10);
}
@Override
public long getProtocolVersion(String paramString, long paramLong)
throws IOException {
return Protocol.versionID;
}
}
Here is the child process class. However, I have realized that RPC.getServer() on the server side is the culprit. Is this a known Hadoop bug, or am I missing something?
public class JavaProcessChild{
public static void main(String...args){
Protocol umbilical = null;
try {
Configuration defaultConf = new Configuration();
InetSocketAddress addr = new InetSocketAddress("localhost", 8989);
umbilical = (Protocol) RPC.waitForProxy(Protocol.class, Protocol.versionID,
addr, defaultConf);
IntWritable input = umbilical.getInput();
JavaProcessChild my = new JavaProcessChild();
if(input!=null && input.equals(new IntWritable(10))){
Thread.sleep(10000);
}
else{
Thread.sleep(1000);
}
} catch (Throwable e) {
e.printStackTrace();
} finally{
if(umbilical != null){
RPC.stopProxy(umbilical);
}
}
}
}
We sorted that out via mail. But I just want to give my two cents here for the public:
So the thread that is not dying there (thus not letting the main thread finish) is the org.apache.hadoop.ipc.Server$Reader.
The reason is that the implementation of readSelector.select() is not interruptible. If you look closely in a debugger or thread dump, it is waiting on that call forever, even after the main thread has already been cleaned up.
Two possible fixes:
make the reader thread a daemon (not so cool, because the selector won't be cleaned up properly, but the process will end)
explicitly close the "readSelector" from outside when interrupting the thread pool
However, this is a bug in Hadoop and I have no time to look through the JIRAs. Maybe it is already fixed; in YARN the old IPC is replaced by protobuf and Thrift anyway.
BTW, this also depends on the platform's selector implementation; I observed these zombies on Debian/Windows systems, but not on Red Hat/Solaris.
If anyone is interested in a patch for Hadoop 1.0, email me. I will sort out the JIRA bug in the near future and edit this post with more information. (Maybe it has been fixed in the meantime anyway.)

org.apache.commons.io.FileCleaningTracker does not delete temp files unless explicitly calling System.gc()?

I am working on an image upload feature for my web app, and am having a strange issue with the FileCleaningTracker from Apache Commons FileUpload. I have an ImageUploadService with a FileCleaningTracker instance variable, and an upload method that creates an instance of DiskFileItemFactory which references the FileCleaningTracker. After the upload method completes successfully, I set the FileCleaningTracker of the DiskFileItemFactory to null, so I would expect the DiskFileItemFactory to be garbage collected; the underlying PhantomReference subclass in FileCleaningTracker would then be notified and delete the temp file the DiskFileItemFactory created.
But that does not happen until I null out the DiskFileItemFactory and call System.gc() at the end of the upload method (only nulling the DiskFileItemFactory does not help). This seems very strange to me. Here is my code:
@Override
public void upload(final HttpServletRequest request) {
ValidateUtils.checkNotNull(request, "upload request");
final File tmp = new File(this.tempFolder);
if (!tmp.exists()) {
tmp.mkdir();
}
DiskFileItemFactory fileItemFactory = new DiskFileItemFactory(this.sizeThreshold, tmp);
fileItemFactory.setFileCleaningTracker(this.fileCleaningTracker);
ServletFileUpload uploadHandler = new ServletFileUpload(fileItemFactory);
List items;
try {
items = uploadHandler.parseRequest(request);
} catch (final FileUploadException e) {
throw new ImageUploadServiceException("Error parsing the http servlet request for image upload.", e);
}
final Iterator it = items.iterator();
while (it.hasNext()) {
final DiskFileItem item = (DiskFileItem) it.next();
if (item.isFormField()) {
// log message
} else {
final String fileName = item.getName();
final File destination = this.createFileForUpload(fileName, this.uploadFolder);
FileChannel outChannel;
try {
outChannel = new FileOutputStream(destination).getChannel();
} catch (final FileNotFoundException e) {
throw new ImageUploadServiceException(e);
}
FileChannel inChannel = null;
try {
inChannel = new FileInputStream(item.getStoreLocation()).getChannel();
outChannel.transferFrom(inChannel, 0, item.getSize());
} catch (final IOException e) {
throw new ImageUploadServiceException(String.format("Error uploading image to '%s/%s'.", this.uploadFolder, destination.getName()), e);
} finally {
IOUtils.closeChannel(inChannel);
IOUtils.closeChannel(outChannel);
}
}
}
fileItemFactory.setFileCleaningTracker(null);
}
With the above code, every upload creates a file in the temp folder, but the fileCleaningTracker never removes it at the end, possibly because the DiskFileItemFactory instance is not garbage collected (I fail to see why it shouldn't be), or because it has been GCed but the PhantomReference in fileCleaningTracker was not notified (how reliable is PhantomReference?).
I waited 10 minutes and the files were still there, so it shouldn't be because the GC has not run, and there are no exceptions.
Now if I add the following code, the temp files are removed every time after the upload:
fileItemFactory = null;
System.gc();
This looks very strange to me, as I would expect the fileItemFactory to be GCed without an explicit call to System.gc().
Any input will be appreciated.
Thank you.
I have the same problem. The temporary files are never removed, even after the server shuts down: the GC never ran, so FileCleaningTracker had no chance to pick the tracked files to delete from its ReferenceQueue, and all the files remained on the hard drive.
Due to the specific behavior of my application I have to clean up after each upload (the files might be very big). Instead of using the standard org.apache.commons.io.FileCleaningTracker I chose to override this class with my own implementation:
/**
* Cleaning tracker to clean files after each upload with special method invocation.
* Not thread safe and must be used with 1 factory = 1 thread policy.
*/
public class DeleteFilesOnEndUploadCleaningTracker extends FileCleaningTracker {
private List<String> filesToDelete = new ArrayList<>();
public void deleteTemporaryFiles() {
for (String file : filesToDelete) {
new File(file).delete();
}
filesToDelete.clear();
}
@Override
public synchronized void exitWhenFinished() {
deleteTemporaryFiles();
}
@Override
public int getTrackCount() {
return filesToDelete.size();
}
@Override
public void track(File file, Object marker) {
filesToDelete.add(file.getAbsolutePath());
}
@Override
public void track(File file, Object marker, FileDeleteStrategy deleteStrategy) {
filesToDelete.add(file.getAbsolutePath());
}
@Override
public void track(String path, Object marker) {
filesToDelete.add(path);
}
@Override
public void track(String path, Object marker, FileDeleteStrategy deleteStrategy) {
filesToDelete.add(path);
}
}
If this is the right approach for you, just inject an instance of the class above into your DiskFileItemFactory:
DeleteFilesOnEndUploadCleaningTracker tracker = new DeleteFilesOnEndUploadCleaningTracker();
fileItemFactory.setFileCleaningTracker(tracker);
And don't forget to invoke the cleaning method after your work with uploaded items is done:
tracker.deleteTemporaryFiles();
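In the upload method from the question this could be wired up roughly like this (sizeThreshold, tmp and request as defined there), so the temporary files are removed even if copying throws:
DeleteFilesOnEndUploadCleaningTracker tracker = new DeleteFilesOnEndUploadCleaningTracker();
DiskFileItemFactory fileItemFactory = new DiskFileItemFactory(sizeThreshold, tmp);
fileItemFactory.setFileCleaningTracker(tracker);
try {
    List items = new ServletFileUpload(fileItemFactory).parseRequest(request);
    // ... copy the uploaded items to their destinations as in the question ...
} finally {
    tracker.deleteTemporaryFiles(); // runs even when an exception is thrown
}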
Forgot to mention: I use commons-fileupload version 1.2.2 and commons-io version 1.3.2.