Long-running export-to-Excel job in ASP.NET Core 2.1 Azure App Service - asp.net-core

I have an Angular 4 application with an ASP.NET Core 2.1 Web API on the server side (for RESTful services). The application is hosted in Azure App Service.
The application has a feature for exporting data in Excel format, and more than 100k rows are expected in the file. Azure App Service has a timeout limit of 3.8 minutes; if a request runs beyond that, the Azure load balancer cancels it and users get an error.
To resolve this, I have decided to move the task to a background process and provide progress updates to the user via SignalR until the task is complete. The flow of the application will be as follows:
User clicks the export-to-Excel button.
The ASP.NET Core API handles this call and puts a request message on an Azure Service Bus topic.
An Azure Function subscribes to the topic; once it receives the notification, it starts processing and fetches the data from Azure SQL.
The Azure Function periodically reports the progress of the task to a SignalR hub. The SignalR hub pushes the notification to the client, so the user can see the progress of the task.
Once the data is ready, the Azure Function prepares the Excel file and sends it to the SignalR hub. The SignalR hub pushes the file to the client.
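Roughly, I imagine the Azure Function part looking something like the sketch below. This assumes Azure SignalR Service in serverless mode with the Functions SignalR output binding; the topic, subscription, hub and connection names are just placeholders.

// Sketch only - assumes Azure SignalR Service (serverless mode) and the
// Service Bus / SignalR bindings; topic, subscription, hub and connection
// names are placeholders.
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public static class ExportFunction
{
    [FunctionName("ProcessExportRequest")]
    public static async Task Run(
        [ServiceBusTrigger("export-topic", "export-subscription", Connection = "ServiceBusConnection")] string exportRequestId,
        [SignalR(HubName = "exportHub")] IAsyncCollector<SignalRMessage> progressMessages)
    {
        // Fetch data from Azure SQL in chunks and report progress after each chunk.
        for (var percent = 10; percent <= 100; percent += 10)
        {
            // ... query the next chunk here ...
            await progressMessages.AddAsync(new SignalRMessage
            {
                Target = "exportProgress",
                Arguments = new object[] { new { exportRequestId, percent } }
            });
        }
    }
}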
I am not sure whether this is the right approach. As per the Microsoft docs, one should avoid long-running functions.
I have also read that SignalR should not be used to push files.
Is there a better solution to achieve this functionality, i.e. export the data to Excel in a background process and push it to the client once it is ready?

Usually in this kind of scenario, we offer the customer a near-real-time solution.
What you could do to resolve this issue:
1) Provide a button for the user to request the export (an Excel file with 100K rows or more).
2) Notify the user that the export request has been submitted.
3) Add a Refresh button that fetches the current status of the export.
4) Have a WebJob behind the scenes that processes the export and uploads the finished file to Azure Storage, for example as a blob.
5) Once the blob is available, update the status to completed.
6) Provide a download link, which will be the endpoint URL of your blob (see the sketch below).
That way your main thread won't be blocked and the screen will stay responsive too.
If you don't want a Refresh button that the user has to keep clicking, you could use SignalR to keep the connection alive and set up a timed check for your blob file. Once the file is available in the blob, simply update the label.
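Here is a minimal sketch of steps 4-6, assuming the WindowsAzure.Storage SDK; the "exports" container name, file name and connection string are placeholders. The WebJob uploads the generated Excel file and hands back a time-limited SAS URL that you can store with the "completed" status and show to the user as the download link.

// Sketch only - container name, file name and connection string are placeholders.
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ExportPublisher
{
    public static async Task<string> UploadAndGetDownloadLinkAsync(Stream excelFile, string fileName, string connectionString)
    {
        var container = CloudStorageAccount.Parse(connectionString)
            .CreateCloudBlobClient()
            .GetContainerReference("exports");
        await container.CreateIfNotExistsAsync();

        var blob = container.GetBlockBlobReference(fileName);
        await blob.UploadFromStreamAsync(excelFile);

        // Time-limited, read-only SAS so the client downloads straight from blob storage.
        var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
        });
        return blob.Uri + sas;
    }
}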
Hope it helps.

export the data to Excel in a background process and push it to the client once it is ready
You could try using an Azure WebJob to run in the background, either continuously or triggered, or use the Azure Batch service, so the long-running export to Excel can complete and store the result in a Storage blob.
While the website is running, the associated WebJobs are running too. You could use a QueueTrigger or an HTTP trigger in the WebJob and invoke it from the website. In general, you have to force the Azure website to stay running: go to the Azure website's application settings and turn on Always On. A rough sketch of a queue-triggered WebJob follows below.
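As a sketch only (the queue and container names are placeholders, and the actual Excel generation from Azure SQL is left as a TODO), a queue-triggered WebJobs SDK function could look like this:

// Sketch only - queue/container names are placeholders; the Excel generation
// from Azure SQL is left as a TODO.
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class ExportFunctions
{
    // Runs whenever the website drops an export request message on the queue.
    public static void GenerateExport(
        [QueueTrigger("export-requests")] string exportRequestId,
        [Blob("exports/{queueTrigger}.xlsx", FileAccess.Write)] Stream output,
        ILogger logger)
    {
        logger.LogInformation("Starting export {id}", exportRequestId);
        // TODO: query the data and write the Excel file to 'output' (blob exports/<id>.xlsx).
    }
}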
And as you have pointed out, SignalR is for real-time messaging, not for uploading files. So you could use the WebClient.UploadFile method to transfer the file to the client instead.

Related

asp.net core execute with url

I have created an ASP.NET Core 3.1 Web API project in which I have also implemented some calls to external API services, which then import data and save it to my local database.
Let's say I have a route "/imports/mymethod" which triggers the import run.
Now I want to set this up as a Windows scheduled task so it runs, for example, every 10 minutes.
When I execute the "exe" file "myproject.exe" it says:
Microsoft.Hosting.Lifetime[0]
Now listening on: http://localhost:5000
Is it possible to compile my project and run the compiled exe with parameters, like "myproject.exe -path http://localhost:5000/imports/mymethod", so that it runs the route and the method?
Or is there another way I can achieve this?
Best regards
If you are running this API just for study purposes on your local machine, the following is the easiest option.
1) Extract the import logic behind the import route, call it from a console application, and run that console application every 10 minutes using a scheduled task (a sketch follows below). Important: do this only if you are running on a local machine and are not actually planning to deploy this to production.
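A minimal sketch of that console application (ImportService is a hypothetical name for whatever class you move the import logic into, not an existing type in your project):

// Sketch only - ImportService is a placeholder for your extracted import logic.
using System;
using System.Threading.Tasks;

public class ImportService
{
    // Placeholder: move the logic behind /imports/mymethod into a method like this.
    public Task RunImportAsync()
    {
        // ... call the external APIs and save the data to the database here ...
        return Task.CompletedTask;
    }
}

public static class Program
{
    public static async Task Main()
    {
        Console.WriteLine($"Import started at {DateTime.Now}");
        await new ImportService().RunImportAsync();
        Console.WriteLine("Import finished");
    }
}

Then point a Windows Task Scheduler task at the built exe and set it to run every 10 minutes.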
If you want to have this in a production environment, then you have a few options:
1) You can write a WebJob which is deployed alongside your Web API and triggered every 10 minutes, and which then imports the data into your database. Refer to "Run background tasks with WebJobs in Azure App Service". You write the logic to import the data and insert it into the database inside the WebJob.
2) Use a timer-triggered Azure Function to call the API from which you want to import the data into the database. This approach, though sound, adds another project that you will have to manage.
3) Create a timer-triggered Logic App which executes the same logic as the Azure Function mentioned in 2).
4) Actually expose your route as "/imports/mymethod" in your Web API, and then on your local machine create a PowerShell script or a batch file which invokes the Web API; schedule that script to run every 10 minutes using the Windows Task Scheduler.
My personal preference is to use WebJobs, as WebJobs were created specifically to run background tasks. A timer-triggered sketch of option 1) is shown below.
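A minimal sketch of option 1) with the WebJobs SDK timer extension. The CRON expression is illustrative, and it assumes the Timers extension is registered on the host (builder.AddTimers()):

// Sketch only - assumes the WebJobs SDK with the Timers extension registered.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class ImportJobs
{
    // CRON format: {second} {minute} {hour} {day} {month} {day-of-week} -> every 10 minutes.
    public static async Task RunImport(
        [TimerTrigger("0 */10 * * * *")] TimerInfo timer,
        ILogger logger)
    {
        logger.LogInformation("Import triggered at {time}", DateTime.UtcNow);
        // TODO: call the same extracted import logic here (or the /imports/mymethod endpoint).
        await Task.CompletedTask;
    }
}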

internal server error connecting to Aurora

In VS 2017 I created an AWS Serverless Application (.NET Core - C#). I have an RDS (Aurora) database with data in it.
I added MySql.Data to the project using NuGet.
Created a new controller to get the data out of the DB.
Created a method and model to Get data.
Built the project and ran it locally in VS.
I was able to use Postman to Get data from the API. GREAT!
Right-clicked the project and selected "Publish to AWS Lambda". Everything published, and I got the new URL.
When calling the new URL with /api/method appended, I get a 500 response. I tried another controller that just returns values with no DB queries, and that works. Any ideas?
The first thing you should do is check the CloudWatch logs for your function to find the source of the error (a 500 indicates an Internal Server Error, i.e. your code throws an exception). Add logging to your function as needed if you don't find anything useful there.
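A minimal sketch of what that logging could look like inside the controller (QueryAurora is a placeholder standing in for your existing MySql.Data query code):

// Sketch only - QueryAurora is a placeholder for your existing MySql.Data code.
using System;
using Amazon.Lambda.Core;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class DataController : Controller
{
    [HttpGet]
    public IActionResult Get()
    {
        try
        {
            return Ok(QueryAurora());
        }
        catch (Exception ex)
        {
            // Anything written here ends up in the function's CloudWatch log stream.
            LambdaLogger.Log($"Aurora query failed: {ex}");
            return StatusCode(500, "Database error - see CloudWatch logs");
        }
    }

    private object QueryAurora() => throw new NotImplementedException();
}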
Access control is one likely candidate: is your database accessible from your Lambda, and does the deployed function correctly receive the database credentials?

Android service needs to periodically access internet

I need to access the internet periodically (every 5 minutes or so) to update a database with a background service. I have tried the following using Android 8 or 9:
Use a boot receiver to start a JobService.
In onStartJob of the class that extends JobService, create a LocationListener and request a single location update.
Have onStartJob schedule the job again to run in 5-10 minutes.
Return true from onStartJob.
In the onLocationChanged callback of the LocationListener, write to a local file and start a thread that makes a PHP request to update the database.
Everything works fine while the underlying process is running. When the process dies, the service keeps periodically updating the local file, but URLConnection.getResponseCode() now throws an exception - java.net.ConnectException: failed to connect to ...
Is there a way to get around this using the above approach? If not, how can I have a background service access the internet even after the underlying process dies?

Simulate the Access Disabled feature in Worklight when the Worklight server itself is down

I am trying to show end users a maintenance message such as "We are down, please try later" and disable the application. But my problem is: what if my Worklight server itself is down and not reachable, so I cannot use the feature provided by the Worklight Console?
Is there a way I can make my app talk to a different server that returns the JSON data below when an app is disabled? Can I simulate this behaviour? Is this possible?
JSON received on access disabled in Worklight:
/*-secure-
{"WL-Authentication-Failure":{"wl_remoteDisableRealm":{"message":"We are down, Please try again soon","downloadLink":null,"messageType":"BLOCK"}}}*/
I have some conceptual problems with this question.
Typically a production environment (simplified) would not consist of a single server serving your end users... meaning, there would be a cluster of nodes, each node being a Worklight Server, and this cluster would sit behind a load balancer that directs the incoming requests. So in a situation where a node is down for maintenance, as in your scenario, there would still be other servers able to serve - there would be no downtime.
Thus your suggestion to simulate a Remote Disable by sending it from another(?) Worklight Server does not seem like the correct path to take (it may even be simply wrong). If you had this second Worklight Server, why wouldn't it just serve the app's business as usual? See again my first paragraph about clustering.
Now let's assume there is still downtime that affects all servers. The application's client logic should be able to handle failed connections to the Worklight Server. In such a case you should handle this in WL.Client.connect()'s onFailure callback function and display a WL.SimpleDialog that looks just like the Remote Disable dialog... or perhaps via initOptions.js's onConnectionFailure callback.
Bottom line: you cannot simulate the JSON that is sent back for the wl_RemoteDisable realm; it is part of a larger security mechanism.
Additionally though, perhaps a way to better handle maintenance mode on your server is to have the HTTP server return a specific HTTP status code, check for this code and display a proper message based on the returned HTTP status code.
To check for this code in a simple example:
Note: the getStatus method is available starting MobileFirst Platform Foundation 7.0 (formerly "Worklight").
function wlCommonInit() {
    WL.Client.connect({onSuccess: success, onFailure: failure});
}

function success(response) {
    // ...
}

function failure(response) {
    if (response.getStatus() == "503") {
        // site is down for maintenance - display a proper message.
    } else if ...
}

Windows Azure Console for Worker Role Cloud Service

I have a worker role cloud service that I have recently developed on my local machine. The service exposes a WCF interface that receives a file as a byte array, recompiles the file, converts it to the appropriate format, then stores it in Azure Storage. I managed to get everything working using the Azure Compute Emulator on my machine and published the service to Azure and... nothing. Running it on my machine again, it works as expected. When I was working on it on my computer, the Azure Compute Emulator's console output was essential in getting the application running.
Is there similar functionality that can be tapped into on the Cloud Service via RDP, such as starting/restarting the role at the command prompt or in PowerShell? If not, what is the best way to debug/log what the worker role is doing (without using IntelliTrace)? I have diagnostics enabled in the project, but it doesn't seem to be giving me the same level of detail as the Compute Emulator console. I've rerun the role and the corresponding .NET application again on localhost and was unable to find any errors in the console.
Edit: The Next Best Thing
Falling back to manual logging, I implemented a class that would feed text files into my Azure Storage account. Here's the code:
public class EventLogger
{
    public static void Log(string message)
    {
        CloudBlobContainer cbc = CloudStorageAccount
            .Parse(RoleEnvironment.GetConfigurationSettingValue("StorageClientAccount"))
            .CreateCloudBlobClient()
            .GetContainerReference("errors");
        cbc.CreateIfNotExist();
        cbc.GetBlobReference(string.Format("event-{0}-{1}.txt", RoleEnvironment.CurrentRoleInstance.Id, DateTime.UtcNow.Ticks))
            .UploadText(message);
    }
}
Calling EventLogger.Log() will create a new text file and record whatever message you pass in. I found an example in the answer below.
There is no console for worker roles that I'm aware of. If diagnostics isn't giving you any help, then you need to get a little hacky. Try tracing out messages and errors to blob storage yourself. Steve Marx has a good example of this here http://blog.smarx.com/posts/printf-here-in-the-cloud
As he notes in the article, this is not for production, just to help you find your problem.