Excuse all the Console.WriteLines! I'm trying to figure out what is happening here.
The following code works if I run it in Visual Studio. When I compile and run it as a command-line program, the line that tries to access an API using HttpClient completely stops the whole process and ends the program. No error handling, nothing. How can this happen? It bombs out if I remove the try/with block as well. I am baffled.
let getTransactionData (ewayCSVData: EwayCSVData, httpClient: HttpClient) = task {
    try
        Console.WriteLine("get transaction data 1")
        if ewayCSVData.transactionType.ToLower() = "refund" then
            Console.WriteLine("get transaction data 2")
            let url = "https://api.ewaypayments.com/Transaction/" + ewayCSVData.transactionNumber.ToString() + "/Refund"
            Console.WriteLine("get transaction data 3")
            let! postResult = httpClient.PostAsync(url, null)
            Console.WriteLine("get transaction data 4")
            let! result = postResult.Content.ReadAsStringAsync()
            Console.WriteLine("get transaction data 5")
            return result
        else
            Console.WriteLine("get transaction data 6")
            let url: string = "https://api.ewaypayments.com/Transaction/" + ewayCSVData.transactionNumber
            let! result = httpClient.GetStringAsync(url) // This line kills the whole process
            Console.WriteLine("get transaction data 7")
            return result
    with ex ->
        Console.Write(ex.Message)
        Console.WriteLine("get transaction data 8")
        return ""
}
I suspect your command-line program is simply exiting because it has reached the end of its code. You're launching an asynchronous task to access the API, but your program doesn't wait for this task to complete before exiting. That's why there are no exceptions or error messages.
To fix this, you can either explicitly wait on the result (e.g. via Task.Result), or put something at the end of your program to prevent the main thread from exiting. One common way to do this is by calling Console.ReadLine() on the last line of your top-level function.
For example, the following program (usually) exits without writing anything to the console:
open System
open System.Threading.Tasks
Task.Run<unit>(fun () ->
    Console.WriteLine("Hello world"))
|> ignore
You can fix this by calling Result:
let t = Task.Run<unit>(fun () ->
    Console.WriteLine("Hello world"))
t.Result
Or by calling Console.ReadLine:
Task.Run<unit>(fun () ->
    Console.WriteLine("Hello world"))
|> ignore
Console.ReadLine() |> ignore
Note that my examples use Task.Run to illustrate the point. I don't think the F# task builder behaves quite the same way, but it sounds similar enough from your description.
Related
I'm actually confused about assembly time vs. subscription time. I know Monos are lazy and don't get executed until they are subscribed to. Below is a method.
public Mono<UserbaseEntityResponse> getUserbaseDetailsForEntityId(String id) {
    GroupRequest request = ImmutableGroupRequest
        .builder()
        .cloudId(id)
        .build();

    Mono<List<GroupResponse>> response = ussClient.getGroups(request);

    List<UserbaseEntityResponse.GroupPrincipal> groups = new CopyOnWriteArrayList<>();

    response.flatMapIterable(elem -> elem)
        .toIterable().iterator().forEachRemaining(
            groupResponse -> {
                groupResponse.getResources().iterator().forEachRemaining(
                    resource -> {
                        groups.add(ImmutableGroupPrincipal
                            .builder()
                            .groupId(resource.getId())
                            .name(resource.getDisplayName())
                            .addAllUsers(convertMemebersToUsers(resource))
                            .build());
                    }
                );
            }
        );

    log.debug("Response Object - " + groups.toString());

    ImmutableUserbaseEntityResponse res = ImmutableUserbaseEntityResponse
        .builder()
        .userbaseId(id)
        .addAllGroups(groups)
        .build();

    Flux<UserbaseEntityResponse.GroupPrincipal> f = Flux.fromIterable(res.getGroups())
        .parallel()
        .runOn(Schedulers.parallel())
        .doOnNext(groupPrincipal -> getResourcesForGroup((ImmutableGroupPrincipal) groupPrincipal, res.getUserbaseId()))
        .sequential();

    return Mono.just(res);
}
This line gets executed without my calling subscribe: Mono<List<GroupResponse>> response = ussClient.getGroups(request); However, the code below will not get executed unless I call subscribe on it.
Flux<UserbaseEntityResponse.GroupPrincipal> f = Flux.fromIterable(res.getGroups())
    .parallel()
    .runOn(Schedulers.parallel())
    .doOnNext(groupPrincipal -> getResourcesForGroup((ImmutableGroupPrincipal) groupPrincipal, res.getUserbaseId()))
    .sequential();
Can I get some more input on assembly time vs subscription?
"Nothing happens until you subscribe" isn't quite true in all cases. There's three scenarios in which a publisher (Mono or Flux) will be executed:
You subscribe;
You block;
The publisher is "hot".
Note that the above scenarios all apply to an entire reactive chain - i.e. if I subscribe to a publisher, everything upstream (dependent on that publisher) also executes. That's why frameworks can, and should, call subscribe when they need to, causing the reactive chain defined in a controller to execute.
In your case it's actually the second of these - you're blocking, which is essentially a "subscribe and wait for the result(s)". Usually the methods that block are clearly labelled, but not always - here it's the toIterable() method on Flux doing the blocking:
Transform this Flux into a lazy Iterable blocking on Iterator.next() calls.
But ah, you say, I'm not calling Iterator.next() - what gives?!
Well, implicitly you are by calling forEachRemaining():
The default implementation behaves as if:
while (hasNext())
action.accept(next());
...and as per the above rule, since ussClient.getGroups(request) is upstream of this blocking call, it gets executed.
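To see that forEachRemaining() really is just a synchronous loop over next(), here's a plain-JDK sketch (no Reactor involved, hand-rolled Iterator for illustration). In Flux.toIterable() the next() call is exactly where the calling thread blocks waiting for the next emitted element:

```java
import java.util.Iterator;

public class ForEachRemainingDemo {
    public static void main(String[] args) {
        // An iterator whose next() could block; here it just counts.
        // In Flux.toIterable(), next() blocks until the next element arrives.
        Iterator<Integer> it = new Iterator<Integer>() {
            private int i = 0;
            public boolean hasNext() { return i < 3; }
            public Integer next() { return i++; }
        };
        StringBuilder sb = new StringBuilder();
        // forEachRemaining() drives next() synchronously on the calling thread,
        // per the default implementation quoted above.
        it.forEachRemaining(sb::append);
        System.out.println(sb); // prints 012
    }
}
```

So even though you never write `next()` yourself, the calling thread is the one doing the (potentially blocking) pulls.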
I am trying to call an external service in a microservice application to get all responses in parallel and combine them before starting the other computation. I know I can use a block() call on each Mono object, but that would defeat the purpose of using the reactive API. Is it possible to fire up all requests in parallel and combine them at one point?
Sample code is below. In this case "Done" prints before the actual responses come up. I also know that the subscribe call is non-blocking.
I want "Done" to be printed after all responses have been collected, so I need some kind of blocking; however, I do not want to block each and every request.
final List<Mono<String>> responseOne = new ArrayList<>();
IntStream.range(0, 10).forEach(i -> {
    Mono<String> responseMono =
        WebClient.create("https://jsonplaceholder.typicode.com/posts")
            .post()
            .retrieve()
            .bodyToMono(String.class);
    System.out.println("create mono response lazy initialization");
    responseOne.add(responseMono);
});

Flux.merge(responseOne).collectList().subscribe(res -> {
    System.out.println(res);
});
System.out.println("Done");
Based on the suggestion, I came up with this which seems to work for me.
StopWatch watch = new StopWatch();
watch.start();

final List<Mono<String>> responseOne = new ArrayList<>();
IntStream.range(0, 10).forEach(i -> {
    Mono<String> responseMono =
        WebClient.create("https://jsonplaceholder.typicode.com/posts")
            .post()
            .retrieve()
            .bodyToMono(String.class);
    System.out.println("create mono response lazy initialization");
    responseOne.add(responseMono);
});

CompletableFuture<List<String>> futureCount = new CompletableFuture<>();
List<String> res = new ArrayList<>();
Mono.zip(responseOne, Arrays::asList)
    .flatMapIterable(objects -> objects) // make flux of objects
    .doOnComplete(() -> futureCount.complete(res)) // completes the future when the flux above finishes
    .subscribe(responseString -> res.add((String) responseString));

watch.stop();
List<String> response = futureCount.get();
System.out.println(response);
// do rest of the computation
System.out.println(watch.getLastTaskTimeMillis());
If you want your calls to be parallel, it is a good idea to use Mono.zip.
Now, you want "Done" to be printed after the collection of all the responses, so you can modify your code as below:
final List<Mono<String>> responseMonos = IntStream.range(0, 10).mapToObj(
index -> WebClient.create("https://jsonplaceholder.typicode.com/posts").post().retrieve()
.bodyToMono(String.class)).collect(Collectors.toList()); // create iterable of mono of network calls
Mono.zip(responseMonos, Arrays::asList) // make parallel network calls and collect it to a list
.flatMapIterable(objects -> objects) // make flux of objects
.doOnComplete(() -> System.out.println("Done")) // will be printed on completion of the flux created above
.subscribe(responseString -> System.out.println("responseString = " + responseString)); // subscribe and start emitting values from flux
It's also not a good idea to call subscribe or block explicitly in your reactive code.
Is it possible to fire up all requests in parallel and combine them at one point?
That's exactly what your code is doing already. If you don't believe me, stick .delayElement(Duration.ofSeconds(2)) after your bodyToMono() call. You'll see that your list prints out after just over 2 seconds, rather than 20 (which is what it would be if executing sequentially 10 times.)
The combining part is happening in your Flux.merge().collectList() call.
In this case "Done" prints before actual response comes up.
That's to be expected, as your last System.out.println() call executes outside of the reactive callback chain. If you want "Done" to print after your list is printed (the list you've given the variable name res in the consumer passed to your subscribe() call), then you'll need to put it inside that consumer, not outside it.
If you're interfacing with an imperative API, and you therefore need to block on the list, you can just do:
List<String> list = Flux.merge(responseOne).collectList().block();
...which will still execute the calls in parallel (so still gain you some advantage), but then block until all of them are complete and combined into a list. (If you're just using reactor for this type of usage however, it's debatable if it's worthwhile.)
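As a plain-JDK analogue of this pattern (illustrative only - no WebClient or Reactor, with supplyAsync standing in for the network call), CompletableFuture shows the same shape: start everything in parallel, then block exactly once at the combine point rather than per request:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelCombineDemo {
    public static void main(String[] args) {
        // Fire all ten "calls" in parallel on the common pool...
        List<CompletableFuture<String>> futures = IntStream.range(0, 10)
                .mapToObj(i -> CompletableFuture.supplyAsync(() -> "response-" + i))
                .collect(Collectors.toList());

        // ...then block once, after all of them, to combine the results.
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        List<String> results = futures.stream()
                .map(CompletableFuture::join) // already complete; join() just unwraps
                .collect(Collectors.toList());

        System.out.println(results.size()); // 10
        System.out.println("Done"); // only printed after all results are collected
    }
}
```

The single join() at the combine point is the moral equivalent of the one block() on the merged/zipped publisher above.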
I have odd behavior when running processes from F#. My basic problem was to run Graphviz's dot.exe with a generated graph and visualize the result.
When I restricted the graph to be small, everything worked fine. But with bigger graphs, it hung on a specific line. I'm curious why this happens, so maybe I can fix my issue. I created an MVCE for the purpose.
I have 2 console programs. One is a simulator of dot.exe, which I shovel input into and expect a .jpg from. This version just streams the input to the output 10 times, in blocks:
// Learn more about F# at http://fsharp.org
// See the 'F# Tutorial' project for more help.
open System
open System.IO
[<EntryPoint>]
let main argv =
    let bufferSize = 4096
    let mutable buffer : char [] = Array.zeroCreate bufferSize
    // it repeats in 4096-char blocks, but that doesn't matter; it simulates producing 10 times the amount of output.
    while Console.In.ReadBlock(buffer, 0, bufferSize) <> 0 do
        for i in 1 .. 10 do
            Console.Out.WriteLine(buffer)
    0 // return an integer exit code
So I have an .exe named C:\Users\MyUserName\Documents\Visual Studio 2015\Projects\ProcessInputOutputMVCE\EchoExe\bin\Debug\EchoExe.exe. Then comes another console project, which uses it:
// Learn more about F# at http://fsharp.org
// See the 'F# Tutorial' project for more help.
open System.IO
[<EntryPoint>]
let main argv =
    let si =
        new System.Diagnostics.ProcessStartInfo(@"C:\Users\MyUserName\Documents\Visual Studio 2015\Projects\ProcessInputOutputMVCE\EchoExe\bin\Debug\EchoExe.exe", "",
// from Fake's Process handling
#if FX_WINDOWSTLE
            WindowStyle = ProcessWindowStyle.Hidden,
#else
            CreateNoWindow = true,
#endif
            UseShellExecute = false,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            RedirectStandardInput = true)
    use p = new System.Diagnostics.Process()
    p.StartInfo <- si
    if p.Start() then
        let input =
            Seq.replicate 3000 "aaaa"
            |> String.concat "\n"
        p.StandardInput.Write input
        // hangs on Flush()
        p.StandardInput.Flush()
        p.StandardInput.Close()
        use outTxt = File.Create "out.txt"
        p.StandardOutput.BaseStream.CopyTo outTxt
        // double WaitForExit because of https://msdn.microsoft.com/en-us/library/system.diagnostics.process.standardoutput(v=vs.110).aspx
        // saying the first WaitForExit() waits for StandardOutput. The next is needed for the whole process.
        p.WaitForExit()
        p.WaitForExit()
    0 // return an integer exit code
This hangs on p.StandardInput.Flush() - except if I change the volume of input to Seq.replicate 300 "aaaa". Why does it work differently?
Process.StandardOutput's reference states that reading from StandardOutput while the child process writes to that stream at the same time might cause a deadlock. Who is the child process in that case? Is it my p.StandardInput.Write input?
Another possible deadlock would be if I
read all text from both the standard output and standard error streams.
But I'm not reading the error stream. Anyway, the documentation suggests handling the input/output with async, so I have the following version:
// same as before
...
    if p.Start() then
        let rec writeIndefinitely (rows: string list) =
            async {
                if rows.Length = 0 then
                    ()
                else
                    do! Async.AwaitTask (p.StandardInput.WriteLineAsync rows.Head)
                    p.StandardInput.Flush()
                    do! writeIndefinitely rows.Tail
            }
        let inputTaskContinuation =
            Seq.replicate 3000 "aaaa"
            |> Seq.toList
            |> writeIndefinitely
        Async.Start (async {
            do! inputTaskContinuation
            p.StandardInput.Close()
        })

        let bufferSize = 4096
        let mutable buffer : char array = Array.zeroCreate bufferSize
        use outTxt = File.CreateText "out.txt"
        let rec readIndefinitely () =
            async {
                let! readBytes = Async.AwaitTask (p.StandardOutput.ReadAsync (buffer, 0, bufferSize))
                if readBytes <> 0 then
                    outTxt.Write (buffer, 0, buffer.Length)
                    outTxt.Flush()
                    do! readIndefinitely ()
            }
        Async.Start (async {
            do! readIndefinitely ()
            p.StandardOutput.Close()
        })
        // with that it throws "Cannot mix synchronous and asynchronous operation on process stream." on the ReadAsync line
        //p.BeginOutputReadLine()
        p.WaitForExit()
        // using dot.exe, it hangs on the second WaitForExit()
        p.WaitForExit()
This doesn't hang, and it writes out.txt - except in the real code using dot.exe. It's as async as it gets. Why does it throw an exception for p.BeginOutputReadLine()?
After some experimenting, the whole async output handling can stay p.StandardOutput.BaseStream.CopyTo outTxt, where outTxt is File.Create, not File.CreateText. Only the async input handling behaves correctly compared to the synchronous input handling, which is weird.
To sum it up: if I have asynchronous input handling, it works fine (except with dot.exe, but if I figure this out, maybe I can fix that too); if I have synchronous input handling, it depends on the size of the input (300 works, 3000 doesn't). Why is that?
Update
Since I did not really need to redirect the standard error, I removed RedirectStandardError = true. That solved the mysterious dot.exe problem.
I think that the deadlock here is following:
Host process writes too much data to the input buffer.
Child process reads from the buffer and writes to output.
Host process does not read from the buffer (it happens after sending all the data). When the output buffer of the child process is filled up, it blocks on writing and stops reading from the input. Two processes are now in the deadlock state.
It is not necessary to use completely asynchronous code. What I think would work is writing in chunks to dot's stdin and reading to the end of the dot's stdout before writing again.
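The deadlock described above can be sketched with plain-JDK piped streams standing in for the child's stdin/stdout pipes (illustrative only, not dot.exe itself). If the writer pushed all 1 MB before anyone read, write() would block forever once the small pipe buffer filled; draining concurrently on another thread avoids the deadlock:

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class PipeDrainDemo {
    public static void main(String[] args) throws Exception {
        // A pipe with a 4 KB buffer stands in for the OS pipe to the child.
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out, 4096);

        // Drain the "output" on a separate thread while we write. Writing
        // everything first and reading only afterwards would block in
        // write() as soon as the 4 KB buffer filled - the deadlock above.
        final long[] total = {0};
        Thread reader = new Thread(() -> {
            byte[] buf = new byte[1024];
            try {
                int n;
                while ((n = in.read(buf)) != -1) total[0] += n;
            } catch (IOException ignored) {}
        });
        reader.start();

        byte[] chunk = new byte[1024];
        for (int i = 0; i < 1024; i++) out.write(chunk); // 1 MB total
        out.close();
        reader.join();
        System.out.println(total[0]); // 1048576
    }
}
```

The 300-vs-3000 behavior in the question matches this: the small input fits in the pipe buffers, the large one fills them before the host ever starts reading.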
How can I create batch processing application with Apache Apex?
All the examples I've found were streaming applications, which means they are not ending and I would like my app to close once it has processed all the data.
Thanks
What is your use-case? Supporting batch natively is on the roadmap and is being worked on right now.
Alternately, till then, once you are sure that your processing is done, the input operator can throw a ShutdownException, which will propagate through the DAG and shut it down.
Let us know if you need further details.
You can add an exit condition before running the app. For example:
public void testMapOperator() throws Exception
{
    LocalMode lma = LocalMode.newInstance();
    DAG dag = lma.getDAG();

    NumberGenerator numGen = dag.addOperator("numGen", new NumberGenerator());
    FunctionOperator.MapFunctionOperator<Integer, Integer> mapper
        = dag.addOperator("mapper", new FunctionOperator.MapFunctionOperator<Integer, Integer>(new Square()));
    ResultCollector collector = dag.addOperator("collector", new ResultCollector());

    dag.addStream("raw numbers", numGen.output, mapper.input);
    dag.addStream("mapped results", mapper.output, collector.input);

    // Create local cluster
    LocalMode.Controller lc = lma.getController();
    lc.setHeartbeatMonitoringEnabled(false);

    // Condition to exit the application
    ((StramLocalCluster) lc).setExitCondition(new Callable<Boolean>()
    {
        @Override
        public Boolean call() throws Exception
        {
            return TupleCount == NumTuples;
        }
    });

    lc.run();

    Assert.assertEquals(sum, 285);
}
For the complete code, refer to https://github.com/apache/apex-malhar/blob/master/stream/src/test/java/org/apache/apex/malhar/stream/FunctionOperator/FunctionOperatorTest.java
I need to run, let's say, 20 parallel tasks. The task function being called is a long-running function and can generate errors. I want to detect that from the caller and revive the failed task.
While I understand that handling the exception inside the task and rerunning it is a solution, I was also wondering whether there is a way to detect this from the caller itself and run the failing task again. When I revive the task, it should fall under the bucket of 20 running tasks, so that the calling process continues to wait for all of them, including the newly revived task, via Await Task.WhenAll(tasks).
Here is the code on the caller side that creates 20 different tasks and waits for all of them to finish:
Try
    watch = New Stopwatch
    watch.Start()
    cs = New CancellationTokenSource
    Timer1.Enabled = True
    Dim tasks = Enumerable.Range(1, 20).Select(Function(i) Task.Run(Function() GetAndProcessMessagesAsync(cs.Token)))
    Await Task.WhenAll(tasks)
Catch ex As Exception
    MsgBox(ex.Message)
    Timer1.Enabled = False
End Try
PS: I have checked the links below for a solution but did not find something suitable to try. I suspect another approach could be to use TaskContinuationOptions, but I am really not sure how and where that would fit in my code above.
Task Parallel Library - How do you get a Continuation with TaskContinuationOptions.OnlyOnCanceled to Fire?
catching parallel tasks if they stop prematurely
I think that doing it inside the Task is the best option, but that doesn't mean you have to change GetAndProcessMessagesAsync. The code could look like this:
Async Function RetryAsync(func As Func(Of Task)) As Task
    While True
        Try
            Await func()
            Return ' success, stop retrying
        Catch
            ' TODO: don't catch all exceptions all the time
        End Try
    End While
End Function
…
Dim tasks = Enumerable.Range(1, 20).Select(
    Function(i) Task.Run(
        Function() RetryAsync(
            Function() GetAndProcessMessagesAsync(cs.Token))))
Also, if GetAndProcessMessagesAsync is truly asynchronous, you might not need that Task.Run.