How to emit the latest value with a delay from a faster original flow? - Kotlin

I have two flows. The first flow updates every 50ms. The second flow is based on the first, but I want it to produce the latest value from the original flow every 300ms. I found the debounce extension for flows, but it doesn't work here (note from the docs):
Note that the resulting flow does not emit anything as long as the original flow emits items faster than every timeoutMillis milliseconds.
So when I try to emit a value every 300ms with debounce, it doesn't emit at all, because the original flow is faster than the interval I need. So how can I do something like this:
delay 300ms
check original flow for latest value
emit this value
repeat
My flows right now:
// fast flow (50ms)
val orientationFlow = merge(_orientationFlow, locationFlow)
val cameraBearingFlow = orientationFlow.debounce(300)
P.S. This approach doesn't fit because it delays each value, so the value is no longer fresh after 300ms. I need to get the freshest value after each 300ms interval:
val cameraBearingFlow = azimuthFlow.onEach {
    delay(ORIENTATION_UPDATE_DELAY)
    it
}

You need sample instead of debounce.
val f = fastFlow.sample(300)
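For instance, applied to the flows from the question, a minimal sketch could look like this (the 50ms producer below is just a stand-in for the merged orientation flow, and the values are made up):

import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

// Stand-in for the merged 50ms orientation flow from the question.
val orientationFlow: Flow<Float> = flow {
    var bearing = 0f
    while (true) {
        emit(bearing)
        bearing += 1f
        delay(50)
    }
}

// sample(300) forwards the most recent value seen in each 300ms window,
// so the collector always receives a fresh value rather than a delayed one.
val cameraBearingFlow = orientationFlow.sample(300)

fun main() = runBlocking {
    cameraBearingFlow.take(5).collect { println("bearing: $it") }
}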

Related

How to print the size of a flow in Kotlin

Hey, I am new to Kotlin Flow. I am trying to print the size of a flow. As we know, a list has a size property. Do we have a similar function for a flow?
val list = mutableListOf(1, 2, 3)
println(list.size)
output
3
How do we get the size of a flow?
dataMutableStateFlow.collectLatest { data ->
    ???
}
Thanks
A Flow doesn't know its size at any moment, because there is an unknown number of future values to be emitted. Also, Flows do not keep a record of how many values they have emitted in the past.
Sequences have the same problem. With both Flows and Sequences, you can only get the count by doing something terminal with them, something that iterates through it all.
The only way to get the size of the Flow is to do something that iterates through the entire Flow. For instance, you can call the suspend function count() on a Flow to get its size. The more complicated way to do it would be to create a count variable and then increment the count inside a collect call. However, counting the emissions of a Flow is only usable for finite cold Flows. Hot flows (SharedFlow and StateFlow) are never finite, and many cold Flows are also infinite.
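For a finite cold flow, both approaches might look roughly like this (flowOf(1, 2, 3) is just an illustrative stand-in):

import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val numbers = flowOf(1, 2, 3)   // finite cold flow

    // Terminal operator: collects the whole flow and returns the number of emissions.
    println(numbers.count())        // 3

    // Manual alternative: increment a counter inside collect.
    var size = 0
    numbers.collect { size++ }
    println(size)                   // 3
}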

What is the difference between ExpectedConditions.presenceOf and element.isPresent

I have been testing with element.isPresent(); for some reason my tests started failing when trying to do browser.wait(element(by.id('id')).isPresent()). It never gets past the wait, even with the element being present.
I started using the code with protractor.ExpectedConditions and it started working. I just want to know what the difference is between one and the other.
Here is the code with ExpectedConditions:
const EC = protractor.ExpectedConditions;
const ele = element(by.id('id'));
return browser.wait(EC.presenceOf(ele));
What is the main difference between one and the other? I have searched on Google but haven't found a proper answer.
If you take a look at this answer on another question you will see that presenceOf() and isPresent() are almost entirely the same. The primary difference is that presenceOf() wraps around isPresent() and returns a Function rather than a Promise.
So why is this important? Well it has to do with how browser.wait() works. If we take a look at the docs we can see that it:
Schedules a command to wait for a condition to hold or promise to be resolved.
This means that if you pass a Promise to browser.wait() it will only wait until that Promise is resolved before continuing on and executing further commands (it doesn't necessarily matter if it resolved true or false). Whereas if you pass a Function to it, it will wait until that condition "holds" before continuing.
Additionally, you can specify a custom timeout parameter for the browser.wait() method. If you do not specify a timeout it will default to 30 seconds according to the docs. I believe this is why you felt that the wait never resolves when using isPresent() (it was likely just taking 30 seconds).
What I would suggest to do is use isPresent() when you expect an element to be present at a specific moment in time and use presenceOf() when you want to wait for an element to be or become present.
Here's an example of how I would use the two:
const EC = protractor.ExpectedConditions;
const ele = element(by.id('id'));
browser.wait(EC.presenceOf(ele), 5000); // Waits a maximum of 5000 milliseconds (5 seconds)
expect(ele.isPresent()).toBe(true); // Expects this element to be present **right now**

How to get the latest value emitted from an observable

Suppose I have a timer which emits one item every second.
I want to subscribe to it and let it run for 10 seconds. After 10 seconds I unsubscribe from it, and then I want to be able to access its last emitted value from some other part of the code.
Here is some sample code:
@Test
fun testMeasureTime() {
    val emitter = Observable.interval(1, TimeUnit.SECONDS)
        .doOnNext { t: Long? -> Log.v("emitter", t.toString()) }
    val disposable = emitter.subscribe()
    Thread.sleep(10000)
    disposable.dispose()
    Thread.sleep(5000)
    // get the last emitted value
    Thread.sleep(5000)
}
Is there a way to get the last emitted value from another part of the code?
I want to use this solution just to measure the execution time of some task.
You can get the last value by either using doOnNext() to set the value of a class field, or inserting a BehaviorSubject in the path.
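A rough sketch of the BehaviorSubject variant (RxJava 3 package names assumed; the class-field-plus-doOnNext variant works the same way):

import io.reactivex.rxjava3.core.Observable
import io.reactivex.rxjava3.subjects.BehaviorSubject
import java.util.concurrent.TimeUnit

fun main() {
    // BehaviorSubject remembers the most recent value pushed into it.
    val lastValue = BehaviorSubject.create<Long>()

    val disposable = Observable.interval(1, TimeUnit.SECONDS)
        .doOnNext { lastValue.onNext(it) }   // route each emission into the subject
        .subscribe()

    Thread.sleep(10_000)
    disposable.dispose()

    // Any other part of the code can now read the last emitted value.
    println("last emitted: ${lastValue.value}")
}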
However, I don't think your code is actually measuring the CPU time of execution of the task. Instead, it is capturing the start and end points of processing on possibly an unrelated thread. So, the best you can get is the wall-clock time.
A better approach is something like (written in Java, since I don't know Kotlin):
long start = System.nanoTime();
performTaskOnThisThread();
long end = System.nanoTime();
long durationInNanoSeconds = end - start;
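For completeness, a Kotlin version of the same measurement (performTaskOnThisThread() is the same hypothetical task as in the Java snippet):

val start = System.nanoTime()
performTaskOnThisThread()   // hypothetical task from the snippet above
val end = System.nanoTime()
val durationInNanoSeconds = end - start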

How to postpone an event in Spark/RabbitMQ

We are designing a system that will stream events using RabbitMQ (maybe later Kafka) and Spark Streaming. Some of our events have been broken into several event types, so as not to have too big an event. This means that certain events have to wait for the other events (with the same id). We cannot proceed with processing until all parts of a specific event have arrived.
Is there a way to delay the processing of an event until the next processing window in Spark Streaming (if the other event has not arrived)?
Thanks,
Nir
From an architectural perspective, there are questions to consider:
how do you determine that all events have arrived?
what happens if one event gets lost?
what happens if events arrive out of order, e.g. the last part first?
In principle, it would seem that breaking an event that was originally formed as a whole into several parts increases the complexity and affects the reliability of the system.
To answer the question in any case: since Spark 1.6.x, a new stateful streaming function has been introduced, mapWithState. It allows you to keep state information per key and emit zero or more events of the same or a different type in response to an incoming event.
Applied to this case, we could think of modelling the state as State[PartialEvent]: as events come in, they are assembled in a PartialEvent object. Once the criteria that an event is complete has been fulfilled, mapWithState can generate a WholeEvent object to be processed downstream.
The process would roughly(*) look like this:
// Keyed by the id shared by all parts of the same logical event
val sourceEventDStream: DStream[(String, Event)] = ???

def stateUpdateFunction(eventId: String, event: Option[Event],
                        partialEventState: State[PartialEvent]): Option[WholeEvent] = {
  val eventState = partialEventState.getOption()   // current partial state for this id, if any
  val updatedEvent = merge(eventState, event)      // merge is a user-supplied helper that folds the new piece in
  if (updatedEvent.isComplete) {
    partialEventState.remove()                     // all parts arrived: clear the state ...
    Some(WholeEvent(updatedEvent))                 // ... and emit the assembled event downstream
  } else {
    partialEventState.update(updatedEvent)         // still incomplete: keep the partial state and wait
    None
  }
}

val wholeEventDStream: DStream[WholeEvent] =
  sourceEventDStream
    .mapWithState(StateSpec.function(stateUpdateFunction _))
    .flatMap(_.toSeq)   // drop the None placeholders emitted while an event is still partial

//do stuff with wholeEventDStream ...
As you can observe, with this approach any PartialEvent that never completes will stay in the state forever. We also need a unique key to identify the events that belong together. There are timeout options that must be considered to cover the failure cases, but the bottom line is that preserving a whole event through the pipeline would be a better approach, if technically possible.
(*) not compiled or tested. Provided only to illustrate the idea.

Replay Recorded Data Stream in F#

I have recorded real-time stock quotes in an SQL database with fields Id, Last, and TimeStamp. Last is the current stock price (as a double), and TimeStamp is the DateTime when the price change was recorded.
I would like to replay this stream in the same way it came in, meaning that if two price changes were originally 12 seconds apart, then the corresponding price change events should fire 12 seconds apart.
In C# I might create a collection, sort it by the DateTime then fire an event using the difference in time to know when to fire off the next price change. I realize F# has a whole lot of cool new stuff relating to events, but I don't know how I would even begin this in F#. Any thoughts/code snippets/helpful links on how I might proceed with this?
I think you'll love the F# solution :-).
To keep the example simple, I'm storing the prices and timestamps in a list containing tuples (the first element is the delay since the last update and the second element is the price). It shouldn't be too difficult to turn your input data into this format. For example:
let prices = [ (0, 10.0); (1000, 10.5); (500, 9.5); (2500, 8.5) ]
Now we can create a new event that will be used to replay the process. After creating it, we immediately attach a handler that will print the price updates:
let evt = new Event<float>()
evt.Publish.Add(printfn "Price updated: %f")
The last step is to implement the replay - this can be done using an asynchronous workflow that loops over the values, asynchronously waits for the specified time and then triggers the event:
async { for delay, price in prices do
            do! Async.Sleep(delay)
            evt.Trigger(price) }
|> Async.StartImmediate
I'm starting the workflow using StartImmediate which means that it will run on the current thread (the waiting is asynchronous, so it doesn't block the thread). Keeping everything single-threaded makes it a bit simpler (e.g. you can safely access GUI controls).
EDIT To wrap the functionality in some component that could be used from other parts of the application, you could define a type like this:
type ReplayDataStream(prices) =
    let evt = new Event<float>()
    member x.Replay() =
        // Start the asynchronous workflow from above here
        async { for delay, price in prices do
                    do! Async.Sleep(delay)
                    evt.Trigger(price) }
        |> Async.StartImmediate
    member x.PriceChanged =
        evt.Publish
Users can then create an instance of the type, attach their event handlers using stream.PriceChanged.Add(...) and then start replaying the recorded changes using Replay().