RxJava timeout without emitting error? - operators

Is there a variant of timeout that does not emit a Throwable?
I would like a complete event to be emitted instead.

You don't need to map errors with onErrorResumeNext. You can just provide a backup Observable using:
timeout(long, TimeUnit, Observable)
It would be something like:
.timeout(500, TimeUnit.MILLISECONDS, Observable.empty())
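For example, a minimal sketch (assuming RxJava 1.x and a source that stays silent longer than the timeout):
Observable<String> data = Observable.never(); // never emits, so the timeout fires
data.timeout(500, TimeUnit.MILLISECONDS, Observable.<String>empty())
    .subscribe(
        item -> System.out.println("next: " + item),
        error -> System.out.println("error: " + error), // never called
        () -> System.out.println("completed"));         // called after ~500 ms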

You can resume from an error with another Observable, for example:
Observable<String> data = ...
data.timeout(1, TimeUnit.SECONDS)
    .onErrorResumeNext(Observable.empty())
    .subscribe(...);

A simpler solution that does not use Observable.timeout (and therefore does not generate an error, avoiding the risk of catching unwanted exceptions) might be to simply take until a timer completes. Note the semantic difference: timeout restarts its clock after every emission, while the timer caps the total duration from subscription:
Observable<String> data = ...
data.takeUntil(Observable.timer(1, TimeUnit.SECONDS))
    .subscribe(...);

You can always use onErrorResumeNext, which receives the error and lets you emit whatever item you want:
/**
 * Here we can see how onErrorResumeNext works, emitting an item in case an error
 * occurs in the pipeline and an exception is propagated.
 */
@Test
public void observableOnErrorResumeNext() {
    // Assumes a field to count retries: private int count = 0;
    Subscription subscription = Observable.just(null)
            .map(Object::toString) // NullPointerException on the null item
            .doOnError(failure -> System.out.println("Error:" + failure.getCause()))
            .retryWhen(errors -> errors.doOnNext(o -> count++)
                            .flatMap(t -> count > 3 ? Observable.error(t) : Observable.just(null)),
                    Schedulers.newThread())
            .onErrorResumeNext(t -> {
                System.out.println("Error after all retries:" + t.getCause());
                return Observable.just("I save the world for extinction!");
            })
            .subscribe(s -> System.out.println(s));
    new TestSubscriber((Observer) subscription).awaitTerminalEvent(500, TimeUnit.MILLISECONDS);
}

Related

WorkManager doesn't trigger after manually stopped - Kotlin

I want to use WorkManager to do some work every 15 minutes, and at the same time I want to stop the work when I click the "StopThread" button. Below is my code:
val workManager = WorkManager.getInstance(applicationContext)
val workRequest = PeriodicWorkRequest.Builder(
    RandomNumberGeneratorWorker::class.java,
    15,
    TimeUnit.MINUTES
).addTag("API_Worker")
    .build()

binding.buttonThreadStarter.setOnClickListener {
    workManager.enqueue(workRequest)
}
binding.buttonStopthread.setOnClickListener {
    workManager.cancelAllWorkByTag("API_Worker")
}
And this is the RandomNumberGeneratorWorker
class RandomNumberGeneratorWorker(
    context: Context,
    params: WorkerParameters
) : Worker(context, params) {

    private val MIN = 0
    private val MAX = 100
    private var mRandomNumber = 0

    override fun doWork(): Result {
        Log.d("worker_info", "Job Started")
        startRandomNumberGenerator()
        return Result.success()
    }

    override fun onStopped() {
        super.onStopped()
        Log.i("worker_info", "Worker has been cancelled")
    }

    private fun startRandomNumberGenerator() {
        Log.d("worker_info", "startRandomNumberGenerator triggered")
        var i = 0
        while (i < 100 && !isStopped) {
            try {
                Thread.sleep(1000)
                mRandomNumber = (Math.random() * (MAX - MIN + 1)).toInt() + MIN
                Log.i(
                    "worker_info",
                    "Thread id: " + Thread.currentThread().id + ", Random Number: " + mRandomNumber
                )
                i++
            } catch (e: InterruptedException) {
                Log.i("worker_info", "Thread Interrupted")
            }
        }
    }
}
The issue that I'm facing is that after I stop the work, it doesn't run again when I click buttonThreadStarter.
I did a little research and found that I can start, stop, restart, etc. the work with the code below:
val workRequest = OneTimeWorkRequest.from(RandomNumberGeneratorWorker::class.java)
binding.buttonThreadStarter.setOnClickListener {
    // beginUniqueWork returns a WorkContinuation; enqueue() actually schedules it
    workManager.beginUniqueWork("WorkerName", ExistingWorkPolicy.REPLACE, workRequest)
        .enqueue()
}
binding.buttonStopthread.setOnClickListener {
    workManager.cancelAllWork()
}
but as you can see, that works when using OneTimeWorkRequest, and with that I can't repeat the work every 15 minutes. Any suggestions on how to resolve this issue?
WorkManager is not designed for periodic work with exact timing. In reality, the work items "are not even periodic".
As you can see from the logs here:
https://developer.android.com/topic/libraries/architecture/workmanager/how-to/debugging#use-alb-shell0dumpsys-jobscheduler
WorkManager delegates to JobScheduler. JobScheduler jobs work in such a way that you have a number of explicit constraints (which you set) and implicit constraints (set by the system), and the job starts only after all of them are satisfied.
When you have a period, there is an extra constraint: TIMING_DELAY. So the fact that your 15 minutes have passed in no way means that the job will be executed; there may well be other unsatisfied constraints. That is because WorkManager is designed for resource optimization: it ensures that the work will finish at some point, even across a device restart, but it is not designed to be exact. Quite the opposite.
After all the constraints are satisfied (which might take a day), the job runs and is no longer needed, and a new job is created, again with your 15-minute TIMING_DELAY constraint. Then the process starts over.
Also, when you say "doesn't trigger", please check why. Inspect the JobScheduler debug output (adb shell dumpsys jobscheduler, as described in the link above) and see whether the job exists at all; if it does, check which constraints are not satisfied.
Long story short: "every 15 minutes" is not a job for WorkManager. Normally you would use AlarmManager for exact timing, but with such a short interval you should consider using a Service instead.
Also, it is dangerous to call cancelAllWork(): you might break the code of some library in your app. Better use tags and cancel by tag.
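If you need the start → stop → start flow to work with a periodic request, one option is unique periodic work with a REPLACE policy, cancelled by its unique name rather than globally. A minimal sketch (in Java for illustration; the unique name "random_number_work" is made up for this example, and context stands for your Context):
PeriodicWorkRequest request = new PeriodicWorkRequest.Builder(
        RandomNumberGeneratorWorker.class, 15, TimeUnit.MINUTES)
        .addTag("API_Worker")
        .build();
WorkManager wm = WorkManager.getInstance(context);
// REPLACE cancels any existing work under this name and enqueues a fresh
// request, so tapping "start" after "stop" schedules the work again.
wm.enqueueUniquePeriodicWork(
        "random_number_work", ExistingPeriodicWorkPolicy.REPLACE, request);
// Later, stop only this work instead of calling cancelAllWork():
wm.cancelUniqueWork("random_number_work");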

Gatling feeder/parameter issue - Exception in thread "main" java.lang.UnsupportedOperationException

I just got involved in a new project doing API tests for our service using Gatling. At this point, I want to run a search query; below is the code:
def chnSendToRender(testData: FeederBuilderBase[String]): ChainBuilder = {
  feed(testData)
  exec(api.AdvanceSearch.searchAsset(s"{\"all\":[{\"all:aggregate:text\":{\"contains\":\"#{edlAssetId}_Rendered\"}}]}", "#{authToken}")
    .check(status.is(200).saveAs("searchStatus"))
    .check(jsonPath("$..asset:id").findAll.optional.saveAs("renderedAssetList"))
  )
  .doIf(session => session("searchStatus").as[Int] == 200) {
    exec { session =>
      printConsoleLog("Rendered Asset ID List: " + session("renderedAssetList").as[String], "INFO")
      session
    }
  }
}
I have already declared the feeder in the simulation Scala file:
class GVRERenderEditor_new extends Simulation {
  private val edlToRender = csv("data/render/edl_asset_ids.csv").queue
  private val chnPostRender = components.notifications.notice.JobsPolling_new.chnSendToRender(edlToRender)

  private val scnSendEDLForRender = scenario("Search Post Render")
    .exitBlockOnFail(exec(preSimAuth))
    .exec(chnPostRender)

  setUp(
    scnSendEDLForRender.inject(atOnceUsers(1)).protocols(httpProtocol)
  )
    .maxDuration(sessionDuration.seconds)
    .assertions(global.successfulRequests.percent.is(100))
}
But the Gatling test failed to run, showing this error: Exception in thread "main" java.lang.UnsupportedOperationException: There were no requests sent during the simulation, reports won't be generated
If I hardcode the #{edlAssetId} (put the real edlAssetId in that query), I get a result. I think I'm passing the parameter wrongly in this case. I've tried to print the output in the console log, but no luck. What's wrong with this code? I would appreciate your help. Thanks!
feed(testData)
exec(api.AdvanceSearch.searchAsset(s"{\"all\":[{\"all:aggregate:text\":{\"contains\":\"#{edlAssetId}_Rendered\"}}]}", "#{authToken}")
  .check(status.is(200).saveAs("searchStatus"))
  .check(jsonPath("$..asset:id").findAll.optional.saveAs("renderedAssetList"))
)
You're missing a . (dot) before the exec to attach it to the feed.
As a result, feed(testData) is a discarded statement, and your method returns only the last instruction, i.e. the exec alone, so the feeder never supplies #{edlAssetId}.
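That is, the same snippet with the dot added, so the feed and the exec form a single chain:
feed(testData)
  .exec(api.AdvanceSearch.searchAsset(s"{\"all\":[{\"all:aggregate:text\":{\"contains\":\"#{edlAssetId}_Rendered\"}}]}", "#{authToken}")
    .check(status.is(200).saveAs("searchStatus"))
    .check(jsonPath("$..asset:id").findAll.optional.saveAs("renderedAssetList"))
  )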

Retry is executed only once in Spring Cloud Stream Reactive

When I try again in Spring Cloud Stream Reactive, a situation that I don't understand arises, so I ask a question.
In case of sending String type data per second, after processing in s-c-stream Function, I intentionally caused RuntimeException according to conditions.
@Bean
fun test(): Function<Flux<String>, Flux<String>?> = Function { input ->
    input.map { sellerId ->
        if (sellerId == "I-ZICGO")
            throw RuntimeException("intentional")
        else
            log.info("do normal: {}", sellerId)
        sellerId
    }.retryWhen(Retry.from { companion ->
        companion.map { rs ->
            if (rs.totalRetries() < 3) { // retrying 3 times
                log.info("retry!!!: {}", rs.totalRetries())
                rs.totalRetries()
            } else
                throw Exceptions.propagate(rs.failure())
        }
    })
}
However, the result of running the above logic is:
2021-02-25 16:14:29.319 INFO 36211 --- [container-0-C-1] o.s.c.s.m.DirectWithAttributesChannel : Channel 'consumer.processingSellerItem-in-0' has 0 subscriber(s).
2021-02-25 16:14:29.322 INFO 36211 --- [container-0-C-1] k.c.m.c.service.impl.ItemServiceImpl : retry!!!: 0
2021-02-25 16:14:29.322 INFO 36211 --- [container-0-C-1] o.s.c.s.m.DirectWithAttributesChannel : Channel 'consumer.processingSellerItem-in-0' has 1 subscriber(s).
Retry is processed only once.
Should I change from reactive to imperative to fix this?
In short, yes. The retry settings are meaningless for reactive functions. You can see a more detailed explanation in the similar SO question here
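For reference, a rough sketch of the imperative variant (shown in Java; the bean name matches the processingSellerItem binding from your logs, and log stands for your logger). Here the framework invokes the function once per record, so the binder's per-message retry (e.g. the maxAttempts consumer property) applies:
@Bean
public Function<String, String> processingSellerItem() {
    return sellerId -> {
        if ("I-ZICGO".equals(sellerId)) {
            // Thrown per message, so the binder's retry applies to this record only
            throw new RuntimeException("intentional");
        }
        log.info("do normal: {}", sellerId);
        return sellerId;
    };
}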

ReactiveX collect elements processed before a failure

I'm using RxJava to create a background job synchronizing my DB.
It connects to an external source and starts to process entries, map them, and insert them in the DB.
When it ends I need the list of all the elements processed. I can get it when everything goes right, but how can I collect all the elements processed if something fails during the flow?
final List<String> res = Observable.create(onSubscribe)
        .buffer(4)
        .flatMap(TestRx::doStuff)
        .buffer(8)
        .map(TestRx::calculateList)
        .toList()
        .toBlocking()
        .single();
System.out.println("strings = " + res);
What I would like is a way such that if doStuff or calculateList throws an exception, the flow stops and returns the list with everything processed up to the error.
Turn the error into completion at each stage you want to survive, so the items that already passed through are still collected:
List<String> res = Observable.create(onSubscribe)
        .buffer(4)
        .flatMap(TestRx::doStuff)
        .onErrorResumeNext(Observable.empty()) // turn error into completion
        .buffer(8)
        .map(TestRx::calculateList)
        .onErrorResumeNext(Observable.empty()) // turn error into completion
        .toList()
        .toBlocking()
        .single();
System.out.println("strings = " + res);

How to recover from akka.stream.io.Framing$FramingException

On: akka-stream-experimental_2.11 1.0.
We are using Framing.delimiter in a Tcp server. When a message arrives with length greater than maximumFrameLength, the FramingException is thrown, and we can capture it from OnError of the ActorSubscriber.
Server Code:
def bind(address: String, port: Int, target: ActorRef, maxInFlight: Int, maxFrameLength: Int)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach {
    conn: Tcp.IncomingConnection =>
      val targetSubscriber = ActorSubscriber[Message](system.actorOf(Props(new TargetSubscriber(target, maxInFlight))))
      val targetSink = Flow[ByteString]
        .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = maxFrameLength, allowTruncation = true))
        .map(raw ⇒ Message(raw))
        .to(Sink(targetSubscriber))
      conn.flow.to(targetSink).runWith(Source(Promise().future))
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
Subscriber code:
class TargetSubscriber(target: ActorRef, maxInFlight: Int) extends ActorSubscriber with ActorLogging {
  private var inFlight = 0

  override protected def requestStrategy = new MaxInFlightRequestStrategy(maxInFlight) {
    override def inFlightInternally = inFlight
  }

  override def receive = {
    case OnNext(msg: Message) ⇒
      target ! msg
      inFlight += 1
    case OnError(t) ⇒
      inFlight -= 1
      log.error(t, "Subscriber encountered error")
    case TargetAck(_) ⇒
      inFlight -= 1
  }
}
Problem:
Messages that are under the max frame length do not flow after this exception for that incoming connection. Killing the client and re-running it works fine.
ActorSubscriber does not honor supervision.
What is the correct way to skip the bad message and continue with the next good message?
Have you tried to put supervision on the targetSink flow instead of the whole materializer? I don't see it anywhere here, and I believe it should be set on that flow directly.
Still, this is more a guess than science ;)

I had the same exception reading from a file, and for me it was solved by adding a newline (return) after the last line.