Is it possible to continue with task C after A and B run to completion without fault or cancellation using a single TPL method? - .net-4.0

I've tried to use Task.Factory.ContinueWhenAll() a few times now with the intent of invoking a continuation only when all the antecedents run to completion without any errors or cancellations. Doing so causes an ArgumentOutOfRangeException to be thrown with the message,
It is invalid to exclude specific continuation kinds for continuations off of multiple tasks. Parameter name: continuationOptions
For example, the code
var first = Task.Factory.StartNew<MyResult>(
    DoSomething,
    firstInfo,
    tokenSource.Token);
var second = Task.Factory.StartNew<MyResult>(
    DoSomethingElse,
    mystate,
    tokenSource.Token);
var third = Task.Factory.ContinueWhenAll(
    new[] { first, second },
    DoSomethingNowThatFirstAndSecondAreDone,
    tokenSource.Token,
    TaskContinuationOptions.OnlyOnRanToCompletion, // not allowed!
    TaskScheduler.FromCurrentSynchronizationContext());
is not acceptable to the TPL. Is there a way to do something like this using some other TPL method?

There doesn't appear to be a direct way to do this. I've gotten around this by changing OnlyOnRanToCompletion to None and checking to see if Exception is non-null for each task passed into the continuation. Something like
private void DoSomethingNowThatFirstAndSecondAreDone(Task<MyResult>[] requestTasks)
{
    if (requestTasks.Any(t => t.Exception != null))
        return;
    // otherwise proceed...
}
works, but it is not a very satisfying way to handle multiple antecedents, and it breaks with the pattern that the single-antecedent ContinueWith uses.
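A slightly more robust variant of the same workaround (still not a single TPL method; a sketch only) is to guard on each antecedent's Status rather than its Exception, which also filters out cancelled tasks:
private void DoSomethingNowThatFirstAndSecondAreDone(Task<MyResult>[] requestTasks)
{
    // TaskStatus.RanToCompletion excludes both Faulted and Canceled antecedents.
    if (requestTasks.Any(t => t.Status != TaskStatus.RanToCompletion))
        return;

    // otherwise proceed...
}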


Using require() later in code and should one handle any exceptions thrown thereby

I have a Kotlin class with a method
fun loadElements(e: Iterable<Int>) {
}
This then constructs a new copy of that Iterable as an ArrayList<Int> within the object.
It is a requirement that all the elements in that ArrayList<Int> be non-negative. It is considered a breach of contract by the caller if that is not met. I've been led to believe that a "breach of contract" is something to be tested by require(), whereas check() is for testing logic internal to that method. Is this correct?
All the examples I have seen have the require() as the very first lines of code within the method. Is it, however, acceptable to run require() in a loop, like this?
public fun loadElements(e: Iterable<Int>) {
    elementArray.clear()
    e.forEach {
        require(it >= 0)
        elementArray.add(it)
        moduleCount += it
    }
    if (elementCount % 2 == 1)
        elementArray.add(0)
    check(elementCount % 2 == 0)
    computeInternalSizes()
}
The thing is, this means that part of the object's internals may already be set up by the time the require() breach is detected: i.e., moduleCount will be wrong and computeInternalSizes() will never get called.
Now, of course, I could just use a separate pass, with the first one checking the require() condition and the second doing all the real work. This would mean that if the input came in as a Sequence<Int>, it would have to be consumed by a terminal operation and iterated more than once.
If the require() throws, I would like to assume that the program cannot continue because a design error has occurred somewhere. However, if someone traps the resulting exception and continues, I will end up with an incoherent object state.
What is best practice for handling conditions where an incoming parameter breach won't be noticed until some significant, unrewindable work has been done?
I tried using a separate pass for checking non-negativity. This worked perfectly well, but given that the input could be coming from a Sequence or similar, I don't want to have to build the whole sequence and then traverse it again.
I tried using check(). This works, but it just shows up as an inconsistency in object state rather than flagging the incoming parameter violation, which makes a breach of contract look like an internal design fault and merely delays the inevitable.
I've tried putting try/catch/finally all over the place, but this is an excessive amount of code for such a simple thing.
I'm not even sure if a program should attempt recovery if a require() fails.
In general, you avoid situations like this by reducing the scope of mutability in your code.
The difference between require and check is mostly a convention. They throw different exceptions, namely IllegalArgumentException and IllegalStateException respectively. As the types of the exceptions suggest, the former is suited for validating the (user) input to a method, whereas the latter is designed to check intermediate states at runtime.
Exceptions in Kotlin should be treated as exactly that: exceptional events that should not occur regularly. See also the Kotlin documentation on why there are no checked exceptions in Kotlin.
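A minimal illustration of that convention, using hypothetical names:
class Account(private var balance: Int) {
    fun withdraw(amount: Int) {
        // The caller broke the contract -> IllegalArgumentException
        require(amount > 0) { "amount must be positive" }
        // The object is in a state in which the call cannot proceed -> IllegalStateException
        check(balance >= amount) { "insufficient balance" }
        balance -= amount
    }
}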
You did not write the name of your surrounding Kotlin class, thus I'll call it Foo for the time being.
Rather than providing a function on Foo that mutates the internal state of Foo, you could create new instances of Foo based on the Iterable<Int> / Sequence<Int>. This way, you only ever have a Foo object when it's in a valid state.
private class Foo(source: Iterable<Int>) {
    private val elementArray = ArrayList<Int>()
    private val moduleCount: Int

    init {
        var internalCount = 0
        for (count in source) {
            require(count >= 0)
            elementArray.add(count)
            internalCount += count
        }
        moduleCount = internalCount
        if (elementArray.size % 2 == 1) {
            elementArray.add(0)
        }
        check(elementArray.size % 2 == 0)
        // ...
    }
}
Alternatively, if you want / need to keep the interface as described in your question but also avoid the invalid state, you could make use of an intermediate copy.
As you're copying the incoming Iterable<Int> / Sequence<Int> into an ArrayList<Int>, I assume you're not working with very large collections.
private class Foo(source: Iterable<Int>) {
    private val elementArray = ArrayList<Int>()
    private var moduleCount = 0

    public fun loadElements(source: Iterable<Int>) {
        val internalCopy = ArrayList<Int>()
        for (count in source) {
            require(count >= 0)
            internalCopy.add(count)
        }

        elementArray.clear()
        for (count in internalCopy) {
            elementArray.add(count)
            moduleCount += count
        }
        if (elementArray.size % 2 == 1) {
            elementArray.add(0)
        }
        check(elementArray.size % 2 == 0)
        // ...
    }
}

How to repeat Mono while not empty

I have a method with a signature like this:
Mono<Integer> getNumberFromSomewhere();
I need to keep calling this until it has no more items to emit. That is, I need to turn it into a Flux<Integer>.
One option is to add repeat(). The point is, I want to stop when the above method first completes empty.
Is there any way to do this? I am looking for a clean way.
A built-in operator that does that (although it is intended for "deeper" nesting) is expand.
expand naturally stops expansion when the returned Publisher completes empty.
You could apply it to your use-case like this:
// this changes each time one subscribes to it
Mono<Integer> monoWithUnderlyingState;

Flux<Integer> repeated = monoWithUnderlyingState
        .expand(i -> monoWithUnderlyingState);
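To make the "underlying state" concrete, here is a hypothetical source backed by an AtomicInteger that eventually runs dry (assumes reactor-core and java.util.concurrent.atomic.AtomicInteger):
AtomicInteger remaining = new AtomicInteger(3);
Mono<Integer> monoWithUnderlyingState = Mono.defer(() -> {
    int next = remaining.getAndDecrement();
    return next > 0 ? Mono.just(next) : Mono.empty();
});

Flux<Integer> repeated = monoWithUnderlyingState
        .expand(i -> monoWithUnderlyingState);
// emits 3, 2, 1 and then completes, because expansion stops at the first empty Mono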
I'm not aware of a built-in operator which would do the job straightaway. However, it can be done using a wrapper class and a mix of operators:
Flux<Integer> repeatUntilEmpty() {
    return getNumberFromSomewhere()
            .map(ResultWrapper::new)
            .defaultIfEmpty(ResultWrapper.EMPTY)
            .repeat()
            .takeWhile(ResultWrapper::isNotEmpty)
            .map(ResultWrapper::value); // unwrap back to the plain Integer values
}

// helper class, does not necessarily need to be a Java record
record ResultWrapper(Integer value) {
    public static final ResultWrapper EMPTY = new ResultWrapper(null);

    public boolean isNotEmpty() {
        return value != null;
    }
}

Async Wait Efficient Execution

I need to iterate over hundreds of ids in parallel and collect the results in a list. I am trying to do it the following way:
val context = newFixedThreadPoolContext(5, "custom pool")
val list = mutableListOf<String>()
ids.map {
    val result: Deferred<String> = async(context) {
        getResult(it)
    }
    //list.add(result.await()
}.mapNotNull(result -> list.add(result.await())
I am getting an error at
mapNotNull(result -> list.add(result.await())
because the await method is not available. Why is await not applicable at this place? The commented line
//list.add(result.await()
works fine, by contrast.
What is the best way to run this block in parallel using coroutine with custom thread pool?
Generally, you are going in the right direction: you need to create a list of Deferred and then await() on them.
If this is exactly the code you are using then you did not return anything from your first map { } block, so you don't get a List<Deferred> as you expect, but List<Unit> (list of nothing). Just remove val result:Deferred<String> = - this way you won't assign result to a variable, but return it from the lambda. Also, there are two syntactic errors in the last line: you used () instead of {} and there is a missing closing parenthesis.
After these changes I believe your code will work, but still, it is pretty weird. You seem to mix two distinct approaches to transform a collection into another. One is using higher-order functions like map() and another is using a loop and adding to a list. You use both of them at the same time. I think the following code should do exactly what you need (thanks #Joffrey for improving it):
val list = ids.map {
    async(context) {
        getResult(it)
    }
}.awaitAll().filterNotNull()
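For completeness, here is a self-contained sketch of how the whole thing could look (getResult is a hypothetical stand-in for the real call; newFixedThreadPoolContext is a delicate API, and the pool should be closed when it is no longer needed):
import kotlinx.coroutines.*

// hypothetical stand-in for the real call
suspend fun getResult(id: Int): String = "result-$id"

fun main() = runBlocking {
    val context = newFixedThreadPoolContext(5, "custom pool")
    val ids = (1..100).toList()
    val list: List<String> = ids.map { id ->
        async(context) { getResult(id) } // start all calls concurrently on the custom pool
    }.awaitAll()                         // suspend until every Deferred has completed
    println(list.size)
    context.close()                      // release the pool threads
}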

Kotlin Kovenant returns the same object for all promises

I'm trying to use Kotlin Kovenant because I want a promise-based solution to track my retrofit calls.
What I did first was this:
all(
    walkingRoutePromise,
    drivingRoutePromise
) success { responses ->
    // Do stuff with the list of responses
}
where the promises I pass are those that are resolved at the completion of my Retrofit calls. However, "responses" is a list of two identical objects. When debugging, I can confirm that two different objects with different values are being passed to the respective resolve methods, yet Kovenant returns two identical objects (same location in memory).
My next attempt was this:
task {
    walkingRoutePromise
} then {
    var returnval = it.get()
    walkingDTO = returnval.deepCopy()
    drivingRoutePromise
} success {
    val returnval = it.get()
    drivingDTO = returnval.deepCopy()
    mapRoutes = MapRoutes(walkingDTO!!, drivingDTO!!)
    currentRoute = mapRoutes!!.walking
    callback()
}
Where I tried to do the calls one at a time and perform deep copies of the results. This worked for the first response, but then I found that it.get() in the success block - the success block of the second call - is the same unchanged object that I get from it.get() in the "then" block. It seems Kovenant is implemented to use one object for all of its resolutions, but after you resolve once, the single object it uses for the resolutions cannot be changed. What am I supposed to do if I want to access unique values from promise.resolve(object)? Seems like a very broken system.

AspectJ: can I neutralize 'throw' (replace it with log) and continue the method?

In the code below, I want to neutralize the throw and continue the method. Can it be done?
public class TestChild extends TestParent {

    private String s;

    public void doit(String arg) throws Exception {
        if (arg == null) {
            Exception e = new Exception("exception");
            throw e;
        }
        s = arg;
    }
}
The net result should be that, in case the exception is triggered (arg == null):
1. throw e is replaced by Log(e)
2. s = arg is executed
Thanks
PS: I can 'swallow' the exception or replace it with another exception, but in all cases the method does not continue; all my interventions take place once the harm is done (i.e. the exception has already been thrown).
I strongly doubt that a general solution exists. But for your particular code and requirements 1 and 2:
privileged public aspect SkipNullBlockAspect {

    public pointcut needSkip(TestChild t1, String a1):
        execution(void TestChild.doit(String)) && this(t1) && args(a1);

    void around(TestChild t1, String a1): needSkip(t1, a1) {
        if (a1 == null) // if the argument is null - apply the hack
        {
            a1 = ""; // alter the argument to skip the if block
            proceed(t1, a1);
            t1.s = null;
            a1 = null; // restore the argument
            System.out.println("Little hack.");
        }
        else
            proceed(t1, a1);
    }
}
I think that what you want generally makes no sense in most cases, because if an application throws an exception it has a reason to do so, and that reason almost always includes the intention not to continue with the normal control flow of the method where the exception was thrown, due to possible subsequent errors caused by bogus data. For example, what if you could neutralise the throw in your code and the next lines of code did something like this:
if (arg == null)
    throw new Exception("exception");

// We magically neutralise the exception and are here with arg == null
arg.someMethod();                        // NullPointerException
double x = 11.0 / Integer.parseInt(arg); // NumberFormatException
anotherMethod(arg);                      // might throw an exception if arg == null
Do you get my point? You take incalculable risks by continuing control flow here, assuming you can at all. Now what are the alternatives?
Let us assume you know exactly that a value of null does not do any harm here. Then why not just catch the exception with an after() throwing advice?
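For example, a minimal sketch of such an advice (note that it only logs the exception after the fact; the exception still propagates to the caller unless handled elsewhere):
public aspect LogThrowingAspect {
    // Runs after doit() has aborted with an exception; the exception still
    // propagates unless something further up the call chain handles it.
    after(String arg) throwing(Exception e):
            execution(void TestChild.doit(String)) && args(arg) {
        System.out.println("doit(" + arg + ") threw: " + e.getMessage());
    }
}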
Or if null is harmful and you know about it, why not intercept method execution and overwrite the parameter so as to avoid the exception to begin with?
Speculatively assuming that the method content is a black box to you and you are trying to do some hacky things here, you can use an around() advice and from there call proceed() multiple times with different argument values (e.g. some authentication token or password) until the called method does not throw an exception anymore.
As you see, there are many ways to solve your practical problem depending on what exactly the problem is and what you want to achieve.
Having said all this, now let us return to your initial technical question of not catching, but actually neutralising an exception, i.e. somehow avoiding its being thrown at all. Because the AspectJ language does not contain technical means to do what you want (thank God!), you can look at other tools which can manipulate Java class files in a more low-level fashion. I have never used them productively, but I am pretty sure that you can do what you want using BCEL or Javassist.