Android service needs to periodically access internet

I need to access the internet periodically (every 5 minutes or so) to update a database from a background service. I have tried the following on Android 8 and 9:
Use a boot receiver to start a JobService.
In onStartJob of the class that extends JobService, create a LocationListener and request a single location update.
Have onStartJob schedule the job again to run in 5-10 minutes.
Return true from onStartJob.
In onLocationChanged of the LocationListener, write to a local file and start a thread that makes a PHP request to update the database.
Everything works fine while the underlying process is running. When the process dies, the service keeps periodically updating the local file, but URLConnection.getResponseCode() now throws an exception: java.net.ConnectException: failed to connect to ...
Is there a way to get around this using the above approach? If not, how can I have a background service access the internet even after the underlying process dies?
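For reference, the database update described in the last step can be done with a plain HttpURLConnection on a worker thread. A minimal sketch, where the endpoint URL and the lat/lng parameter names are hypothetical; the failing getResponseCode() call from the question corresponds to the last call in postUpdate:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Map;

public class LocationUploader {

    // Encode key/value pairs as an application/x-www-form-urlencoded body.
    static String buildFormBody(Map<String, String> params) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (sb.length() > 0) sb.append('&');
            sb.append(URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8))
              .append('=')
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    // POST the location to the server; run this on a worker thread, never the main thread.
    static int postUpdate(String endpoint, Map<String, String> params) throws Exception {
        byte[] body = buildFormBody(params).getBytes(StandardCharsets.UTF_8);
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        try {
            conn.setConnectTimeout(15_000);
            conn.setReadTimeout(15_000);
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body);
            }
            return conn.getResponseCode(); // throws ConnectException when the host is unreachable
        } finally {
            conn.disconnect();
        }
    }
}
```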

Related

Asp.Net Core - Start background task and ensure only one is running

I would like to start a long-running task from an API controller and return 200 when the task has started.
I want only one task running; if another request comes in, the controller should check whether a task is already running.
If a task is running, just ignore the request. If not, start a new one.
I was thinking of using a fire-and-forget approach in ASP.NET Core (keeping the dependency alive) to start the task. Then I need some thread-safe place to store an IsRunning variable.
Have you checked Hangfire? It can be run in cluster mode, and you can also query it to check whether a specific task is running.
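The thread-safe IsRunning check the question asks about boils down to an atomic compare-and-swap flag, so the check and the start cannot race. A minimal sketch in Java (the question targets C#, where Interlocked.CompareExchange plays the same role; all names here are illustrative):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SingleTaskGate {
    private final AtomicBoolean running = new AtomicBoolean(false);

    // Starts the task and returns true if no task is in flight; otherwise returns false.
    public boolean tryStart(Runnable task) {
        if (!running.compareAndSet(false, true)) {
            return false; // another task is already running; forget this request
        }
        Thread worker = new Thread(() -> {
            try {
                task.run();
            } finally {
                running.set(false); // always release the flag, even if the task fails
            }
        });
        worker.setDaemon(true);
        worker.start();
        return true;
    }

    public boolean isRunning() {
        return running.get();
    }
}
```

The key point is that compareAndSet makes "check then set" a single atomic step; a plain boolean field checked and then assigned separately would let two requests through.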

WCF InstancePersistenceCommand Exception

I have a WCF application that involves some async communication with external services. When we start a new expedient, a new instance is created; it processes data, sends an XML to an external service, and waits for the response. The response requires a person to review the XML before replying, so it is usually delayed for a long time. For this reason, the workflow goes idle and we use persistence with AppFabric.
The thing is that sometimes, when we receive the response, the following exception is raised:
The execution of the InstancePersistenceCommand named {urn:schemas-microsoft-com:System.Activities.Persistence/command}LoadWorkflowByInstanceKey was interrupted by an error.
Normally this error occurs only very sporadically. However, we are trying to update the app to include new functionality (it does not modify the workflow), but when the new build is deployed to the server, the instances that were created with the old deployment and were waiting for a response throw this exception when the response arrives from the external service. Instances started with the new deployment process the response without problems.
I have been looking for information about this problem but haven't found much. Can anybody help me?
SOLUTION:
Thanks a lot for your answer; it may be helpful for me in the future. In this case, the problem was that I was updating the assembly version of one of the projects involved (to upload a NuGet package), and for a reason I don't understand, the instances created with the old version raised this exception when the service with the new version had to manipulate them.
If I change the assembly version to upload the NuGet package, then set the original version back and deploy with it, everything works OK. Does anybody know the reason?
Thanks a lot.
This may be because there is a program running in the background and trying to extend the lock on the instance store every 30 seconds, and it seems that whenever the connection to the SQL service fails, it marks the instance store as invalid.
You can try <workflowIdle timeToUnload="0"/>; if that doesn't work, you can look at the approaches in these links:
Windows workflow 4.0 InstancePersistenceCommand Error
Why do I get exception "The execution of the InstancePersistenceCommand named LoadWorkflowByInstanceKey was interrupted by an error"
WF4 InstancePersistenceCommand interrupted
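For reference, the workflowIdle setting mentioned above lives inside a service behavior in web.config, next to the instance store configuration. A sketch, where the connection string name is a placeholder for your own:

```xml
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <!-- AppFabric / SQL persistence store -->
        <sqlWorkflowInstanceStore connectionStringName="WorkflowPersistence"
                                  instanceCompletionAction="DeleteAll" />
        <!-- timeToUnload="0" persists and unloads the instance as soon as it goes idle -->
        <workflowIdle timeToUnload="0" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```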

Intermittent problems starting Azure App Services: "500.37 ANCM Failed to Start Within Startup Time Limit"

Our app services are experiencing a problem where they can't be restarted by the hosting environment (ANCM).
The user is getting the following screen in that case:
Http Error 500.37
Our production subscription consists of up to 8 different app services, and the problem can randomly hit one or several of them.
The problem can occur several times a week, or just once a month.
The bootstrapping procedure of our app services is not time-consuming.
The last occurrence of the problem left these entries in the event log:
Failed to gracefully shutdown application 'MACHINE/WEBROOT/APPHOST/XXXXXXXXX'.
followed by:
Application '/LM/W3SVC/815681839/ROOT' with physical root 'D:\home\site\wwwroot' failed to load coreclr. Exception message: Managed server didn't initialize after 120000 ms
In most cases the problem can be resolved by manually stopping and starting the app service. In some cases we had to do that twice.
We are not able to reproduce that behavior locally.
The App Service Plan is S2 and we actually use just one instance.
The documentation of the Http error 500.37 recommends:
"You may need to stagger the startup process of multiple apps."
But there is no hint of how to do that.
How can we ensure that our app services restart without errors?
HTTP Error 500.37 - ANCM Failed to Start Within Startup Time Limit
You can try following approaches:
Approach 1: If possible, try moving one app into a new App Service with a separate App Service plan, then check whether it starts as expected.
Please note that creating and using a separate App Service plan incurs additional charges.
Approach 2: Increase the startupTimeLimit attribute of the aspNetCore element.
For more information about the startupTimeLimit attribute, please check: https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/aspnet-core-module?view=aspnetcore-3.1#attributes-of-the-aspnetcore-element
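Approach 2 is applied in the app's web.config. A sketch, where processPath and arguments are placeholders for your own deployment; startupTimeLimit is in seconds and defaults to 120, which matches the "didn't initialize after 120000 ms" message in the event log:

```xml
<!-- web.config: give the app more time to start before ANCM gives up -->
<aspNetCore processPath="dotnet"
            arguments=".\MyApp.dll"
            stdoutLogEnabled="false"
            startupTimeLimit="240"
            hostingModel="inprocess" />
```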

Long running export to excel job in Aspnet Core 2.1 Azure App Service

I have an Angular 4 application with an ASP.NET Core 2.1 Web API on the server side (for RESTful services). The application is hosted in Azure App Service.
The application has a feature that exports data in Excel format; more than 100k rows are expected in the file. Azure App Service has a request timeout limit of about 3.8 minutes (230 seconds). If a request goes beyond that, the Azure load balancer cancels it and users get an error.
To resolve this issue, I have decided to move this task to a background process and provide updates to the user via SignalR until the task is complete. The flow of the application will be as follows:
User clicks on export to excel button.
The ASP.NET Core API handles this call and puts the request on an Azure topic.
An Azure Function subscribes to the topic; once it receives a message, it starts processing, fetching the data from Azure SQL.
The Azure Function periodically reports task progress to a SignalR hub, which pushes notifications to the client so the user can follow the progress.
Once the data is ready, the Azure Function prepares the Excel file and sends it to the SignalR hub, which pushes the file to the client.
I am not sure whether this is the right approach. Per the Microsoft docs, one should avoid long-running functions.
I have also read that SignalR should not be used to push files.
Is there a better solution for this functionality, i.e. exporting the data to Excel in a background process and pushing it to the client once it is ready?
Usually in this kind of scenario, we offer the customer a near-real-time solution.
What you could do to resolve this issue:
1) On a button click, submit the request to export the user's data (an Excel file with 100k rows or more).
2) Notify the user that the export request has been submitted.
3) Also add a refresh button that fetches the status of the export.
4) Have a WebJob behind the scenes that processes the file and uploads the result to Azure Storage, e.g. a blob.
5) Once the blob file is available, update the status to completed.
6) Provide a download link, which will be the endpoint URL of your blob.
That way your main thread won't be blocked and the screen stays responsive.
If you don't want the refresh-button approach of repeatedly checking for the report, you could use SignalR to keep the connection alive and set up a timed check for your blob file. Once the file is available in the blob, simply update the label.
Hope it helps.
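The submit/refresh flow above boils down to a job registry keyed by an ID: submit returns immediately, the worker updates the status, and the refresh button polls it. A minimal in-process sketch in Java (illustrative only; in production the status would live in a database or blob metadata, not in memory):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExportJobRegistry {
    public enum Status { PENDING, RUNNING, COMPLETED, FAILED }

    private final Map<String, Status> jobs = new ConcurrentHashMap<>();
    private final ExecutorService pool =
        Executors.newFixedThreadPool(2, task -> {
            Thread t = new Thread(task);
            t.setDaemon(true); // don't keep the process alive for background workers
            return t;
        });

    // Steps 1-2: accept the request and immediately return a job id to the client.
    public String submit(Runnable export) {
        String id = UUID.randomUUID().toString();
        jobs.put(id, Status.PENDING);
        pool.submit(() -> {
            jobs.put(id, Status.RUNNING);
            try {
                export.run();                   // step 4: build the file, upload to blob storage
                jobs.put(id, Status.COMPLETED); // step 5: mark done; the next refresh sees it
            } catch (RuntimeException e) {
                jobs.put(id, Status.FAILED);
            }
        });
        return id;
    }

    // Step 3: the refresh button polls this.
    public Status status(String id) {
        return jobs.getOrDefault(id, Status.FAILED);
    }
}
```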
export the data to excel in background process and push it to client once it is ready
You could use an Azure WebJob to run continuously in the background, or use Azure Batch, so the long-running export to Excel can complete and store its output in a Storage blob.
When the website is running, its associated WebJobs run too. You can use a queueTrigger or httpTrigger in the WebJob and invoke it from the website. In general, you have to keep the Azure website always on: go to the App Service application settings and turn on Always On.
And as you have pointed out, SignalR is for real-time messaging, not for uploading files. So you could use the WebClient.UploadFile method to transfer the file instead.

RUN#Cloud consistently throws me out during a heavy operation

I'm using a large app instance to run a basic java web application (GWT + Spring). There's an expensive operation within my application (report) which takes a long time to execute.
I've tried running it with the CloudBees SDK on my local machine, with settings similar to what it would have in the cloud, and it seems to work just fine, finishing in about 3-4 minutes.
On the cloud, it seems to take longer. The problem isn't that it takes long; what happens is that CloudBees terminates the session after 5 minutes and gives me an error in my browser saying 'Unable to connect to server. Please contact your administrator'. A report that doesn't take as long runs just fine. My application has a session timeout of 30 minutes, so that isn't the problem either.
What could possibly be going wrong? Is it something to do with CloudBees?
This may be due to proxy buffering of your request through the routing layer (revproxy), so it most likely isn't a session timeout but the HTTP connection getting cut.
You can set proxyBuffering=false via the bees CLI (e.g. when you deploy the app); this will ensure longer-running connections can work.
Ideally, however, you would change the app slightly to return a token to the browser that you can poll for completion status, since even with a connection that survives that long, the experience over the internet may be worse than it is locally.