WorkManager doesn't trigger after being manually stopped - Kotlin

I want to use WorkManager to do some work every 15 minutes. At the same time, I want to stop the WorkManager when I click the "StopThread" button. Below is my code:
val workManager = WorkManager.getInstance(applicationContext)
val workRequest = PeriodicWorkRequest.Builder(
    RandomNumberGeneratorWorker::class.java,
    15,
    TimeUnit.MINUTES
).addTag("API_Worker")
    .build()

binding.buttonThreadStarter.setOnClickListener {
    workManager.enqueue(workRequest)
}
binding.buttonStopthread.setOnClickListener {
    workManager.cancelAllWorkByTag("API_Worker")
}
And this is the RandomNumberGeneratorWorker:
class RandomNumberGeneratorWorker(
    context: Context,
    params: WorkerParameters
) : Worker(context, params) {

    private val MIN = 0
    private val MAX = 100
    private var mRandomNumber = 0

    override fun doWork(): Result {
        Log.d("worker_info", "Job Started")
        startRandomNumberGenerator()
        return Result.success()
    }

    override fun onStopped() {
        super.onStopped()
        Log.i("worker_info", "Worker has been cancelled")
    }

    private fun startRandomNumberGenerator() {
        Log.d("worker_info", "startRandomNumberGenerator triggered")
        var i = 0
        while (i < 100 && !isStopped) {
            try {
                Thread.sleep(1000)
                mRandomNumber = (Math.random() * (MAX - MIN + 1)).toInt() + MIN
                Log.i(
                    "worker_info",
                    "Thread id: " + Thread.currentThread().id + ", Random Number: " + mRandomNumber
                )
                i++
            } catch (e: InterruptedException) {
                Log.i("worker_info", "Thread Interrupted")
            }
        }
    }
}
The issue I'm facing is that after I stop the WorkManager, it doesn't run again when I click buttonThreadStarter.
I did a little research and found that I can start-stop-start (etc.) the WorkManager with the code below:
val workRequest = OneTimeWorkRequest.from(RandomNumberGeneratorWorker::class.java)
binding.buttonThreadStarter.setOnClickListener {
    workManager.beginUniqueWork("WorkerName", ExistingWorkPolicy.REPLACE, workRequest)
        .enqueue() // beginUniqueWork only builds a WorkContinuation; enqueue() actually schedules it
}
binding.buttonStopthread.setOnClickListener {
    workManager.cancelAllWork()
}
But as you can see, this works with a OneTimeWorkRequest, and with that I can't repeat the work every 15 minutes. Any suggestions on how to resolve this issue?

WorkManager is not designed for periodic work with exact timing. In reality, the jobs "are not even periodic".
As you can see from the logs here:
https://developer.android.com/topic/libraries/architecture/workmanager/how-to/debugging#use-adb-shell-dumpsys-jobscheduler
WorkManager delegates to JobScheduler. JobScheduler jobs work like this: you have a number of explicit constraints (which you set) and implicit constraints (set by the system), and once all of them are satisfied, the job starts.
When you have a period, there is an extra constraint: TIMING_DELAY. So even if your 15 minutes pass, that in no way means the job will be executed; there might be, and surely will be, other constraints still unsatisfied. That is because WorkManager is designed for resource optimization: it ensures that the work will finish at some point, even across a device restart, but it is not designed to be exact. It is quite the opposite.
And once all the constraints are satisfied (which might take a day), the job runs, is no longer needed, and a new job is created with your 15-minute TIMING_DELAY constraint again. Then the process starts over.
Also, when you say "doesn't trigger", please check why. Look at the JobScheduler debug output (adb shell dumpsys jobscheduler) and see whether the work exists at all. If it does, check which constraints are not satisfied.
But long story short: "every 15 minutes" is not something for WorkManager. Normally you would use AlarmManager for exact timing, but with such a short interval you should consider using a Service instead.
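For completeness, a rough sketch of what the AlarmManager route could look like (AlarmReceiver and the request code are hypothetical placeholders, and on Android 12+ exact alarms also require the SCHEDULE_EXACT_ALARM permission):
// Sketch: schedule one exact tick 15 minutes from now. AlarmReceiver is a
// hypothetical BroadcastReceiver that would do the work and re-schedule itself.
val alarmManager = applicationContext.getSystemService(Context.ALARM_SERVICE) as AlarmManager
val pendingIntent = PendingIntent.getBroadcast(
    applicationContext,
    0, // hypothetical request code
    Intent(applicationContext, AlarmReceiver::class.java),
    PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_IMMUTABLE
)
alarmManager.setExactAndAllowWhileIdle(
    AlarmManager.RTC_WAKEUP,
    System.currentTimeMillis() + TimeUnit.MINUTES.toMillis(15),
    pendingIntent
)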
Also, it is dangerous to call cancelAllWork(): you might break the code of some library in your app. You had better use tags and cancel by tag.
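Along those lines, here is a minimal sketch (my suggestion, with the unique name chosen arbitrarily) of enqueueing the periodic work under a unique name and cancelling only that name. Note that it builds a fresh PeriodicWorkRequest on every click: each WorkRequest carries a unique ID, and as far as I can tell re-enqueueing the very same instance after it has been cancelled is a no-op, which would explain why your start button stops working.
// Build a new request per click instead of reusing a single instance.
fun newApiWorkRequest() = PeriodicWorkRequest.Builder(
    RandomNumberGeneratorWorker::class.java,
    15, TimeUnit.MINUTES
).addTag("API_Worker").build()

binding.buttonThreadStarter.setOnClickListener {
    workManager.enqueueUniquePeriodicWork(
        "API_Worker",                        // unique work name, chosen arbitrarily
        ExistingPeriodicWorkPolicy.REPLACE,  // replace the previous instance, if any
        newApiWorkRequest()
    )
}
binding.buttonStopthread.setOnClickListener {
    workManager.cancelUniqueWork("API_Worker")
}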

Related

Gatling feeder/parameter issue - Exception in thread "main" java.lang.UnsupportedOperationException

I just got involved in a new project doing API testing for our service using Gatling. At this point, I want to run a search query; below is the code:
def chnSendToRender(testData: FeederBuilderBase[String]): ChainBuilder = {
  feed(testData)
  exec(api.AdvanceSearch.searchAsset(s"{\"all\":[{\"all:aggregate:text\":{\"contains\":\"#{edlAssetId}_Rendered\"}}]}", "#{authToken}")
    .check(status.is(200).saveAs("searchStatus"))
    .check(jsonPath("$..asset:id").findAll.optional.saveAs("renderedAssetList"))
  )
    .doIf(session => session("searchStatus").as[Int] == 200) {
      exec { session =>
        printConsoleLog("Rendered Asset ID List: " + session("renderedAssetList").as[String], "INFO")
        session
      }
    }
}
I already declared the feeder in the simulation Scala file:
class GVRERenderEditor_new extends Simulation {
  private val edlToRender = csv("data/render/edl_asset_ids.csv").queue
  private val chnPostRender = components.notifications.notice.JobsPolling_new.chnSendToRender(edlToRender)
  private val scnSendEDLForRender = scenario("Search Post Render")
    .exitBlockOnFail(exec(preSimAuth))
    .exec(chnPostRender)

  setUp(
    scnSendEDLForRender.inject(atOnceUsers(1)).protocols(httpProtocol)
  )
    .maxDuration(sessionDuration.seconds)
    .assertions(global.successfulRequests.percent.is(100))
}
But the Gatling test failed to run, showing this error: Exception in thread "main" java.lang.UnsupportedOperationException: There were no requests sent during the simulation, reports won't be generated
If I hardcode #{edlAssetId} (put the real edlAssetId in that query), I get a result. I think I am passing the parameter wrongly in this case. I've tried to print the output to the console log, but no luck. What's wrong with this code? I would appreciate your help. Thanks!
feed(testData)
exec(api.AdvanceSearch.searchAsset(s"{\"all\":[{\"all:aggregate:text\":{\"contains\":\"#{edlAssetId}_Rendered\"}}]}", "#{authToken}")
  .check(status.is(200).saveAs("searchStatus"))
  .check(jsonPath("$..asset:id").findAll.optional.saveAs("renderedAssetList"))
)
You're missing a . (dot) before the exec, so it is never chained onto the feed: it should read feed(testData).exec(…).
As a result, your method returns only its last instruction, i.e. the exec alone, and the feeder is never attached.

Retrofit OkHttp - "unexpected end of stream"

I am getting "Unexpected end of stream" while using Retrofit (2.9.0) with OkHttp3 (4.9.1)
Retrofit configuration:
interface ApiServiceInterface {

    companion object Factory {
        fun create(): ApiServiceInterface {
            val interceptor = HttpLoggingInterceptor()
            interceptor.level = HttpLoggingInterceptor.Level.BODY
            val client = OkHttpClient.Builder()
                .connectTimeout(30, TimeUnit.SECONDS)
                .writeTimeout(30, TimeUnit.SECONDS)
                .readTimeout(30, TimeUnit.SECONDS)
                .addInterceptor(Interceptor { chain ->
                    chain.request().newBuilder()
                        .addHeader("Connection", "close")
                        .addHeader("Accept-Encoding", "identity")
                        .build()
                        .let(chain::proceed)
                })
                .retryOnConnectionFailure(true)
                .connectionPool(ConnectionPool(0, 5, TimeUnit.MINUTES))
                .protocols(listOf(Protocol.HTTP_1_1))
                .build()
            val gson = GsonBuilder().setLenient().create()
            val retrofit = Retrofit.Builder()
                .addCallAdapterFactory(CoroutineCallAdapterFactory())
                .addConverterFactory(GsonConverterFactory.create(gson))
                .baseUrl("http://***.***.***.***:****")
                .client(client)
                .build()
            return retrofit.create(ApiServiceInterface::class.java)
        }
    }

    @Headers("Content-type: application/json", "Connection: close", "Accept-Encoding: identity")
    @POST("/")
    fun requestAsync(@Body data: JsonObject): Deferred<Response>
}
So far I have found out the following:
1. This issue only occurs for me while using Android Studio emulators running on Windows (7, 10, 11) - this was reproduced on 2 different laptops on different networks.
2. When running Android Studio emulators under OS X, the issue does not reproduce (100% of requests succeed).
3. ARC/Postman clients never have any issues completing the same requests to my backend.
4. When running from Windows Android Studio emulators, this issue reproduces in about 10-50% of requests; the other requests work without problems.
5. Identical requests can result in this error or complete successfully.
6. Responses which take about 11 sec to complete can result in success, while responses which take about 100 msec to complete can result in this error.
7. Commenting out .client(client) from the Retrofit configuration eliminates this issue, but I lose the ability to use interceptors and other OkHttp functionality.
8. Adding headers (Connection: close, Accept-Encoding: identity) does not solve the issue.
9. Turning retryOnConnectionFailure on or off has no impact on the issue either.
10. Changing the HttpLoggingInterceptor level or removing it completely does not solve the issue.
Server-side configuration:
const http = require('http');
const server = http.createServer((req, res) => {
    const callback = function(code, request, data) {
        let result = responser(code, request, data);
        res.writeHead(200, {
            'Content-Type': 'x-application/json',
            'Connection': 'close',
            'Content-Length': Buffer.byteLength(result)
        });
        res.end(result);
    };
    ...
});
server.listen(process.env.PORT, process.env.HOSTNAME, () => {
    console.log(`Server is running`);
});
So, based on 1, 2, 3 - this is unlikely to be a server-side issue.
Based on 4, 5, 6 - it is not a malformed-request or execution-time related issue.
Guessing from 7 - the roots of this issue lie in OkHttp rather than Retrofit itself.
I have read almost half of Stack Overflow in search of a resolution, e.g.:
unexpected end of stream retrofit
Retrofit OkHttp unexpected end of stream on Connection error
and also discussions on OkHttp's GitHub:
https://github.com/square/okhttp/issues/3682
https://github.com/square/okhttp/issues/3715
But nothing helped so far.
Any idea what might be causing the problem?
Update
I've got more info on the situation.
First, I changed the headers on the backend to not pass Content-Length, and to pass Transfer-Encoding: identity instead. I don't know why, but Postman gives an error if both of these headers are present, saying it is not right.
res.writeHead(200, {
    'Content-Type': 'x-application/json',
    'Connection': 'close',
    'Transfer-Encoding': 'identity'
});
After that, I started to receive another error on Windows-hosted Android Studio emulators (with an equal fail/success ratio to "unexpected end of stream"):
2021-12-09 14:58:19.696 401-401/? D/P2P-> FRG DEBUG:: java.io.EOFException: End of input at line 1 column 1807 path $.meta
    at com.google.gson.stream.JsonReader.nextNonWhitespace(JsonReader.java:1397)
    at com.google.gson.stream.JsonReader.doPeek(JsonReader.java:483)
    at com.google.gson.stream.JsonReader.hasNext(JsonReader.java:415)
    at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:216)
    at retrofit2.converter.gson.GsonResponseBodyConverter.convert(GsonResponseBodyConverter.java:40)
    at retrofit2.converter.gson.GsonResponseBodyConverter.convert(GsonResponseBodyConverter.java:27)
    at retrofit2.OkHttpCall.parseResponse(OkHttpCall.java:243)
    at retrofit2.OkHttpCall$1.onResponse(OkHttpCall.java:153)
    at okhttp3.internal.connection.RealCall$AsyncCall.run(RealCall.kt:519)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
    at java.lang.Thread.run(Thread.java:764)
Spending a lot of time debugging this issue, I found that this exception is generated by JsonReader.java in the method nextNonWhitespace, where it tries to get colons, double quotes, and curly or square braces to compose a JSON object from the buffer, decoded as a char array.
This buffer itself is received in the fillBuffer method of the same module, and it has a length limit of 1024 elements. In my case the backend response is longer than this value (1807 chars), so while JsonReader.java parses my response as a JSON object, it does so in 2 iterations.
On each iteration it fills the buffer here:
int total;
while ((total = in.read(buffer, limit, buffer.length - limit)) != -1) {
    limit += total;

    // if this is the first read, consume an optional byte order mark (BOM) if it exists
    if (lineNumber == 0 && lineStart == 0 && limit > 0 && buffer[0] == '\ufeff') {
        pos++;
        lineStart++;
        minimum++;
    }

    if (limit >= minimum) {
        return true;
    }
}
The read method is called on the ResponseBody.kt class from okhttp3:
@Throws(IOException::class)
override fun read(cbuf: CharArray, off: Int, len: Int): Int {
    if (closed) throw IOException("Stream closed")
    val finalDelegate = delegate ?: InputStreamReader(
        source.inputStream(),
        source.readBomAsCharset(charset)).also {
        delegate = it
    }
    return finalDelegate.read(cbuf, off, len)
}
The main problem is:
At the first iteration all goes well: ResponseBody.kt "reads" the first 1024 chars and gives them to JsonReader.java, which composes part of the response object from them.
When the second iteration comes, ResponseBody.kt "reads" the last part of the response and fills the start of the char buffer with it, so the char buffer now contains the tail of the response as its first elements, followed by whatever elements were left over from the first iteration.
The problem is that in most cases (about 80%) it loses the last char of the response, in about 10% of cases it loses the last 2 chars, and in about 10% of cases it reads all chars. Here are screenshots:
It must contain 783 chars to complete the JSON, but as shown at line 1290 it receives only 782.
Looking at the buffer itself:
the char at index 782 (783rd in order) must be the second curly brace that closes the JSON root, but instead there are leftovers from where the first iteration started. This results in the exception mentioned above.
Now if we look at a situation where the request finished successfully:
With the same request, it occasionally returns the valid number of chars: 783.
And in the buffer itself, the second brace is now present where it must be.
In this case the request is successful.
The same goes for the response ending from Postman: the Postman success rate in parsing the response is 100%, and the same is true for OS X hosted Android Studio emulators and the real devices I've used.
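To make the failure mode concrete, here is a small self-contained Kotlin sketch (mine, not part of the original investigation) showing that Gson raises exactly this kind of EOFException-backed error when the final closing brace is missing:
import com.google.gson.Gson
import com.google.gson.JsonSyntaxException

fun main() {
    val full = """{"meta":{"ok":true}}"""
    val truncated = full.dropLast(1) // simulate the lost final '}'

    val gson = Gson()
    println(gson.fromJson(full, Map::class.java)) // parses fine: {meta={ok=true}}

    try {
        gson.fromJson(truncated, Map::class.java)
    } catch (e: JsonSyntaxException) {
        // cause: java.io.EOFException (end of input), matching the log above
        println("Parse failed as expected: ${e.cause}")
    }
}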
Update 2
It seems the full buffer is obtained in RealBufferedSource.kt:
internal inline fun RealBufferedSource.commonSelect(options: Options): Int {
    check(!closed) { "closed" }

    while (true) {
        val index = buffer.selectPrefix(options, selectTruncated = true)
        when (index) {
            -1 -> {
                return -1
            }
            -2 -> {
                // We need to grow the buffer. Do that, then try it all again.
                if (source.read(buffer, Segment.SIZE.toLong()) == -1L) return -1
            }
            else -> {
                // We matched a full byte string: consume it and return it.
                val selectedSize = options.byteStrings[index].size
                buffer.skip(selectedSize.toLong())
                return index
            }
        }
    }
}
and here the last char is already missing:
Update 3
Found this unsolved question describing exactly the same behavior:
Retrofit Json data truncated
Also, a comment from the Android Studio emulator issue tracker:
https://issuetracker.google.com/issues/119027639#comment9
OK, it took some time, but I've found what was going wrong and how to work around it.
When Android Studio emulators running on Windows (checked on 7 & 10) receive a JSON-typed reply from the server via Retrofit, they can, with varying probability, lose the last 1 or 2 symbols of the body when it is decoded to a string. These symbols contain the closing curly brackets, so such a body cannot be parsed into an object by the Gson converter, which results in the exception being thrown.
The idea of the workaround I found is to add an interceptor to Retrofit which checks whether the last symbols of the decoded body match those of a valid JSON response, and appends them if they are missing.
interface ApiServiceInterface {

    companion object Factory {
        fun create(): ApiServiceInterface {
            val interceptor = HttpLoggingInterceptor()
            interceptor.level = HttpLoggingInterceptor.Level.BODY
            val stringInterceptor = Interceptor { chain: Interceptor.Chain ->
                val request = chain.request()
                val response = chain.proceed(request)
                val source = response.body()?.source()
                source?.request(Long.MAX_VALUE)
                val buffer = source?.buffer()
                var responseString = buffer?.clone()?.readString(Charset.forName("UTF-8"))
                if (responseString != null && responseString.length > 2) {
                    val lastTwo = responseString.takeLast(2)
                    if (lastTwo != "}}") {
                        val lastOne = responseString.takeLast(1)
                        responseString = if (lastOne != "}") {
                            "$responseString}}"
                        } else {
                            "$responseString}"
                        }
                    }
                }
                val contentType = response.body()?.contentType()
                val body = ResponseBody.create(contentType, responseString ?: "")
                return@Interceptor response.newBuilder().body(body).build()
            }
            val client = OkHttpClient.Builder()
                .connectTimeout(30, TimeUnit.SECONDS)
                .writeTimeout(30, TimeUnit.SECONDS)
                .readTimeout(30, TimeUnit.SECONDS)
                .addInterceptor(interceptor)
                .addInterceptor(stringInterceptor)
                .retryOnConnectionFailure(true)
                .connectionPool(ConnectionPool(0, 5, TimeUnit.MINUTES))
                .protocols(listOf(Protocol.HTTP_1_1))
                .build()
            val gson = GsonBuilder().create()
            val retrofit = Retrofit.Builder()
                .addCallAdapterFactory(CoroutineCallAdapterFactory())
                .addConverterFactory(GsonConverterFactory.create(gson))
                .addConverterFactory(ScalarsConverterFactory.create())
                .baseUrl("http://3.124.6.203:5000")
                .client(client)
                .build()
            return retrofit.create(ApiServiceInterface::class.java)
        }
    }

    @Headers("Content-type: application/json", "Connection: close", "Accept-Encoding: identity")
    @POST("/")
    fun requestAsync(@Body data: JsonObject): Deferred<Response>
}
After these changes, the issue no longer occurred.

Hangfire executes job twice

I am using Hangfire.AspNetCore 1.7.17 and Hangfire.MySqlStorage 2.0.3 for software that is currently in production.
Now and then, we get a report of jobs being executed twice, despite the usage of the [DisableConcurrentExecution] attribute with a timeout of 30 seconds.
It seems that as soon as those 30 seconds have passed, another worker picks up that same job again.
The code is fairly straightforward:
public async Task ProcessPicking(HttpRequest incomingRequest)
{
    var filePath = await StoreStreamAsync(incomingRequest, TriggerTypes.Picking);
    var picking = await XmlHelper.DeserializeFileAsync<Picking>(filePath);
    // delay by 20 minutes so outbound-out gets the chance to be sent first
    BackgroundJob.Schedule(() => StartPicking(picking), TimeSpan.FromMinutes(20));
}

[TriggerAlarming("[IMPORTANT] Failed to parse picking message to **** object.")]
[DisableConcurrentExecution(30)]
public void StartPicking(Picking picking)
{
    var orderlinePickModels = picking.ToSalesOrderlinePickQuantityRequests().ToList();
    var orderlineStatusModels = orderlinePickModels.ToSalesOrderlineStatusRequests().ToList();
    var isParsed = DateTime.TryParse(picking.Order.UnloadingDate, out var unloadingDate);
    for (var i = 0; i < orderlinePickModels.Count; i++)
    {
        // prevents bugs with usage of i in the background jobs
        var index = i;
        var id = BackgroundJob.Enqueue(() => SendSalesOrderlinePickQuantityRequest(orderlinePickModels[index], picking.EdiReference));
        BackgroundJob.ContinueJobWith(id, () => SendSalesOrderlineStatusRequest(
            orderlineStatusModels.First(x => x.SalesOrderlineId == orderlinePickModels[index].OrderlineId),
            picking.EdiReference, picking.Order.PrimaryReference, isParsed ? unloadingDate : DateTime.MinValue));
    }
}

[TriggerAlarming("[IMPORTANT] Failed to send order line pick quantity request to ****.")]
[AutomaticRetry(Attempts = 2)]
[DisableConcurrentExecution(30)]
public void SendSalesOrderlinePickQuantityRequest(SalesOrderlinePickQuantityRequest request, string ediReference)
{
    var audit = new AuditPostModel
    {
        Description = $"Finished job to send order line pick quantity request for item {request.Itemcode}, part of ediReference {ediReference}.",
        Object = request,
        Type = AuditTypes.SalesOrderlinePickQuantity
    };
    try
    {
        _logger.LogInformation($"Started job to send order line pick quantity request for item {request.Itemcode}.");
        var response = _service.SendSalesOrderLinePickQuantity(request).GetAwaiter().GetResult();
        audit.StatusCode = (int)response.StatusCode;
        if (!response.IsSuccessStatusCode) throw new TriggerRequestFailedException();
        audit.IsSuccessful = true;
        _logger.LogInformation("Successfully posted sales order line pick quantity request to ***** endpoint.");
    }
    finally
    {
        Audit(audit);
    }
}
It schedules the main task (StartPicking) that creates the objects required for the two subtasks:
Send picking details to customer
Send status update to customer
The first job is duplicated. Perhaps the second job is as well, but that is not important enough to care about, as it just concerns a status update. However, the first job causes the customer to think that more items have been picked than in reality.
I would assume that Hangfire updates the state of a job to e.g. "in progress", and checks this state before starting a job. Is my timeout on the disabled concurrent execution too low? Is it possible in this scenario that the database connection used to update the state takes about 30 seconds (to be fair, it is running on a slow server with ~8 GB RAM, 6 vCores), due to which the second worker is already picking the job up again?
Or is this a Hangfire-specific issue that must be tackled?

Perl6: check if STDIN has data

In my Perl 6 script, I want to do a (preferably non-blocking) check of standard input to see if data is available. If this is the case, then I want to process it, otherwise I want to do other stuff.
Example (consumer.p6):
#!/usr/bin/perl6
use v6.b;
use fatal;

sub MAIN() returns UInt:D {
    while !$*IN.eof {
        if some_fancy_check_for_STDIN() { #TODO: this needs to be done.
            for $*IN.lines -> $line {
                say "Process '$line'";
            }
        }
        say "Do something Else.";
    }
    say "I'm done.";
    return 0;
}
As a STDIN generator, I wrote another Perl 6 script (producer.p6):
#!/usr/bin/perl6
use v6.b;
use fatal;

sub MAIN() returns UInt:D {
    $*OUT.say("aaaa aaa");
    sleep-until now+2;
    $*OUT.say("nbbasdf");
    sleep-until now+2;
    $*OUT.say("xxxxx");
    sleep-until now+2;
    return 0;
}
If consumer.p6 works as expected, it should produce the following output when called via ./producer.p6 | ./consumer.p6:
Process 'aaaa aaa'
Do something Else.
Process 'nbbasdf'
Do something Else.
Process 'xxxxx'
Do something Else.
I'm done.
But actually, it produces the following output (if the if condition is commented out):
Process 'aaaa aaa'
Process 'nbbasdf'
Process 'xxxxx'
Do something Else.
I'm done.
You are using an old version of Perl 6, as v6.b is from before the official release of the language.
So some of what I have below may need a newer version to work.
Also, why are you using sleep-until now+2 instead of sleep 2?
One way to do this is to turn the .lines into a Channel, then you can use .poll.
#!/usr/bin/env perl6
use v6.c;

sub MAIN () {
    # convert it into a Channel so we can poll it
    my $lines = $*IN.Supply.lines.Channel;

    my $running = True;
    $lines.closed.then: { $running = False }

    while $running {
        with $lines.poll() -> $line {
            say "Process '$line'";
        }
        say "Do something Else.";
        sleep ½;
    }
    say "I'm done.";
}
Note that the code above currently blocks at the my $lines = … line, so it doesn't start doing anything until the first line comes in. To get around that, you could do the following:
my $lines = supply {
    # unblock the $*IN.Supply.lines call
    whenever start $*IN.Supply {
        whenever .lines { .emit }
    }
}.Channel;

RxJava timeout without emitting an error?

Is there a variant of timeout that does not emit a Throwable?
I would like a complete event to be emitted instead.
You don't need to map errors with onErrorResumeNext. You can just provide a backup observable using:
timeout(long,TimeUnit,Observable)
It would be something like:
.timeout(500, TimeUnit.MILLISECONDS, Observable.empty())
You can resume from an error with another Observable, for example:
Observable<String> data = ...
data.timeout(1, TimeUnit.SECONDS)
    .onErrorResumeNext(Observable.empty())
    .subscribe(...);
A simpler solution that does not use Observable.timeout (and thus does not generate an error, avoiding the risk of catching unwanted exceptions) might be to simply take until a timer completes:
Observable<String> data = ...
data.takeUntil(Observable.timer(1, TimeUnit.SECONDS))
    .subscribe(...);
You can always use onErrorResumeNext, which will receive the error and let you emit whatever item you want:
/**
 * Here we can see how onErrorResumeNext works and emits an item in case an error
 * occurs in the pipeline and an exception is propagated
 */
@Test
public void observableOnErrorResumeNext() {
    Subscription subscription = Observable.just(null)
        .map(Object::toString)
        .doOnError(failure -> System.out.println("Error:" + failure.getCause()))
        .retryWhen(errors -> errors.doOnNext(o -> count++)
                .flatMap(t -> count > 3 ? Observable.error(t) : Observable.just(null)),
            Schedulers.newThread())
        .onErrorResumeNext(t -> {
            System.out.println("Error after all retries:" + t.getCause());
            return Observable.just("I save the world for extinction!");
        })
        .subscribe(s -> System.out.println(s));
    new TestSubscriber((Observer) subscription).awaitTerminalEvent(500, TimeUnit.MILLISECONDS);
}