ASP.NET Core controller Task timing out - asp.net-core

I have an ASP.NET Core 3.1 website that imports data. Prior to importing, I want to run some validation on the import. So the user selects the file to import, clicks 'Validate', and then either gets validation error messages so they can fix the import file, or is allowed to proceed with the import.
The problem I am running into is the length of time these validation and import processes take. If the file is small, everything works as expected. If the file is larger (over 1,000 records), the validation and/or import may take several minutes. On my local machine, or a server on my network, this works fine. On my actual public-facing website, I am getting:
503 first byte timeout
So, I need some strategies for getting around this. Turning up the timeout seems like a rabbit hole. It looks like BackgroundService/IHostedService is probably the best way to go, but I can't seem to find an example of how to do it in the way I would like:
Call "Validate" via AJAX
Turn on a loader
Perform validation
Turn off loader
Display either success or list of errors to user
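A rough sketch of how that flow could be wired up with a queued BackgroundService and a pollable status endpoint is below. This is only a sketch, not code from the project in question: all type, route and parameter names are illustrative, and it assumes the uploaded file has already been saved somewhere the worker can read it.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Hosting;

// Registration (Startup.ConfigureServices):
// services.AddSingleton<ValidationQueue>();
// services.AddHostedService<ValidationWorker>();

public class ValidationJob
{
    public Guid Id { get; set; }
    public string FilePath { get; set; }   // assumes the upload was staged to disk first
}

public class ValidationQueue
{
    private readonly Channel<ValidationJob> _channel = Channel.CreateUnbounded<ValidationJob>();

    // Completed results, keyed by job id, so the browser can poll for them.
    public ConcurrentDictionary<Guid, IList<string>> Results { get; }
        = new ConcurrentDictionary<Guid, IList<string>>();

    public ValueTask EnqueueAsync(ValidationJob job) => _channel.Writer.WriteAsync(job);

    public IAsyncEnumerable<ValidationJob> DequeueAllAsync(CancellationToken ct)
        => _channel.Reader.ReadAllAsync(ct);
}

public class ValidationWorker : BackgroundService
{
    private readonly ValidationQueue _queue;
    public ValidationWorker(ValidationQueue queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await foreach (var job in _queue.DequeueAllAsync(stoppingToken))
        {
            // Call the existing business-logic validation here (file checks, CsvHelper
            // read, duplicate checks, ...); it can take minutes without tying up a request.
            var errors = await RunValidationAsync(job, stoppingToken);
            _queue.Results[job.Id] = errors;
        }
    }

    // Placeholder for the real validation; returns the list of error messages.
    private Task<IList<string>> RunValidationAsync(ValidationJob job, CancellationToken ct)
        => Task.FromResult<IList<string>>(new List<string>());
}

[ApiController]
[Route("api/import")]
public class ImportController : ControllerBase
{
    private readonly ValidationQueue _queue;
    public ImportController(ValidationQueue queue) => _queue = queue;

    [HttpPost("validate")]
    public async Task<IActionResult> Validate(string filePath)
    {
        var job = new ValidationJob { Id = Guid.NewGuid(), FilePath = filePath };
        await _queue.EnqueueAsync(job);
        return Accepted(new { jobId = job.Id });   // returns immediately, so no 503
    }

    [HttpGet("validate/{jobId}")]
    public IActionResult Status(Guid jobId)
    {
        if (_queue.Results.TryGetValue(jobId, out var errors))
            return Ok(new { done = true, errors });
        return Ok(new { done = false });
    }
}
```

The AJAX side would show the loader when it POSTs to the validate route, poll the status route every couple of seconds, and hide the loader and render either success or the error list once `done` is true. In a real setup the results dictionary would also need some expiry, or the status could be persisted to the database instead.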
##UPDATE## -
Validation -
Ajax call to controller
Controller calls Business Logic code
a. Check file extension
b. Check file size
c. Read in the .csv with CsvHelper
d. Check that all required columns are present
e. Check that required columns contain valid data - length, no whitespace, valid zip code, valid phone, etc...
f. Check for internal duplicates
g. If append (as opposed to overwrite), check for duplicates in the database - this is the slow step
So, would a better solution be to speed up the validation process itself? Is BackgroundService overkill?
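If the append-mode duplicate check against the database is the bottleneck, one thing worth trying before reaching for a background service is to make that check set-based rather than per row. A hedged sketch, assuming EF Core (the post doesn't say which data access layer is in use) and with made-up entity and column names:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Illustrative types only; the real project presumably has its own entities/DTOs.
public class Contact { public int Id { get; set; } public string Email { get; set; } }
public class CsvRow { public int LineNumber { get; set; } public string Email { get; set; } }
public class AppDbContext : DbContext
{
    public DbSet<Contact> Contacts { get; set; }
}

public static class DuplicateCheck
{
    // One round-trip to the database instead of one query per CSV row.
    public static async Task<List<string>> FindDatabaseDuplicatesAsync(
        AppDbContext db, IReadOnlyCollection<CsvRow> rows)
    {
        var incomingKeys = rows.Select(r => r.Email.Trim()).Distinct().ToList();

        // Pull back only the keys that already exist (translates to a single IN (...) query).
        var existing = await db.Contacts
            .Where(c => incomingKeys.Contains(c.Email))
            .Select(c => c.Email)
            .ToListAsync();

        var existingSet = new HashSet<string>(existing, StringComparer.OrdinalIgnoreCase);

        return rows
            .Where(r => existingSet.Contains(r.Email.Trim()))
            .Select(r => $"Row {r.LineNumber}: '{r.Email}' already exists in the database.")
            .ToList();
    }
}
```

For very large files the incoming key list may need to be sent in batches of a few hundred to stay under query parameter limits, but that is still far fewer round-trips than one lookup per row, and it may keep the whole validation inside a normal request timeout.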

Related

How to invoke a custom ResultFilter before ClientErrorResultFilter is executed in ASP.NET 6

I spent almost a full day debugging why my client couldn't post any forms, until I found out the anti-forgery mechanism got borked on the client side and the server just responded with a 400 error, with zero logs or information (it turns out anti-forgery validation failures are logged internally at Information level).
So I decided the server needs to handle this scenario specially; however, even going by this answer, I don't really know how to do that (aside from hacking).
Normally I would set up an IAlwaysRunResultFilter and check for IAntiforgeryValidationFailedResult. Easy.
Except that I use API controllers, so by default all results get transformed into ProblemDetails. So context.Result as mentioned here is always of type ObjectResult. The solution accepted there is to use options.SuppressMapClientErrors = true;, however I want to retain this mapping at the end of the pipeline. But if this option isn't set to true, I have no idea how to intercept the Result in the pipeline before this transformation.
So in my case, I want to do something with the result of the anti-forgery validation as mentioned in the linked post, but after that I want to retain the ProblemDetails transformation. But my question is titled generally, as it is about executing filters before the aforementioned client mapping filter.
Through hacking I am able to achieve what I want. If we take a look at the source code, we can see that the filter I want to precede has an order of -2000. So if I register my global filter like this: o.Filters.Add(typeof(MyResultFilter), -2001);, then the filter shown here correctly executes before ClientErrorResultFilter, and thus I can handle the result and retain the transformation after the handling. However, I feel like this is just exploiting the open-source-ness of .NET 6, and of course, as you can see, it's an internal constant, so I have no guarantee the next patch doesn't change it and break my code. Surely there must be a proper way to order my filter to run before the API transform.
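For reference, the shape of the hack described above is roughly the following. This is only a sketch: the filter name and the logging are illustrative, and the -2001 ordering is exactly the part that leans on an internal constant.

```csharp
using Microsoft.AspNetCore.Antiforgery;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Logging;

// Runs before ClientErrorResultFilter only because that filter currently uses
// the internal order -2000; a future patch could change that.
public class AntiforgeryFailureResultFilter : IAlwaysRunResultFilter
{
    private readonly ILogger<AntiforgeryFailureResultFilter> _logger;

    public AntiforgeryFailureResultFilter(ILogger<AntiforgeryFailureResultFilter> logger)
        => _logger = logger;

    public void OnResultExecuting(ResultExecutingContext context)
    {
        // Because this runs before the client-error mapping, the original result type
        // is still visible here instead of the mapped ProblemDetails ObjectResult.
        if (context.Result is IAntiforgeryValidationFailedResult)
        {
            // Do whatever handling is needed (logging, metrics, custom headers, ...).
            // Leaving context.Result untouched lets the ProblemDetails mapping
            // still run afterwards, which is the behavior asked for above.
            _logger.LogWarning("Anti-forgery validation failed for {Path}",
                context.HttpContext.Request.Path);
        }
    }

    public void OnResultExecuted(ResultExecutedContext context) { }
}

// Registration:
// services.AddControllers(o => o.Filters.Add(typeof(AntiforgeryFailureResultFilter), -2001));
```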

Change Key of HttpContext.Request.Query item in ASP.NET Core

I am trying to work around an issue with a third-party filter. My current plan is to put a filter in front of that filter to "fix" the query string so it does not error out.
I made an ActionFilterAttribute and added it into the filter list. It is running fine. I am adding my logic in the OnActionExecuting method.
The first item of context.HttpContext.Request.Query has a Key that is a JSON structure. I need to change that Key to be {}.
Problem is that both context.HttpContext.Request.Query and context.HttpContext.Request.QueryString are read-only.
How can I alter the context.HttpContext.Request.Query or the context.HttpContext.Request.QueryString?
EDIT - The Underlying Problem:
BreezeJS did a minimal upgrade to support .NET Core. In this upgrade, part of the code expects every call that has any parameters to return an IQueryable (QueryFns.cs Line 32). From reading the code, it seems like this is an error (the calling function, i.e. the actual filter, seems to expect null to be returned rather than an exception thrown).
Either way, this makes moving to .NET Core very hard.
I considered my other options, and if this approach fails, I will continue to pursue them:
Submit a pull request to fix the issue: The project has not accepted any pull requests in over a year and a half, so it seems unlikely mine will be taken.
Fork my own branch: I would rather not have to create and maintain a separate version with my own build and publishing pipeline.
Find a way to make the Breeze filter ignore the call when the result is not an IQueryable: I am currently looking into this one. (This question.)
Find a way to send my call from the client differently so that Breeze ignores calls that do not return IQueryable: The return type of the call is owned by the service, and this is an issue with the service. I would rather not create tight coupling between the service and the client, with the client crafting workarounds for service-side filter issues.
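For what it's worth, one possible shape for the query-rewriting filter described at the top of this question is sketched below. It is only a sketch: the attribute name is made up, and it relies on the fact that setting Request.QueryString causes the default QueryFeature to re-parse Request.Query from the new raw value (worth verifying against the ASP.NET Core version in use), so the read-only IQueryCollection never has to be mutated directly.

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Primitives;

public class RewriteBreezeQueryAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        var request = context.HttpContext.Request;
        if (request.Query.Count == 0)
            return;

        // Replace the first key (the JSON structure) with "{}" and keep everything else.
        var rewritten = request.Query
            .Select((kv, index) => new KeyValuePair<string, StringValues>(
                index == 0 ? "{}" : kv.Key,
                kv.Value));

        // Re-assigning the query string makes subsequent reads of Request.Query
        // reflect the rewritten values.
        request.QueryString = QueryString.Create(rewritten);
    }
}
```

This filter would still need to be ordered so it runs before the Breeze filter, which is what the question already plans ("put a filter in front of that filter").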

Quickly-editable config file, not visible to public visitors?

I am in the middle of working with, and getting a handle on, Vue.js. I would like to code my app so that it has some configurable behaviors, so that I could play with parameter values the way you do when you edit your Sublime preferences, but without having to compile my app again. Ideally, I would want my colleagues to be able to fiddle with settings all day long, by editing a file over FTP maybe, or through some interface....
The only way I know how to do it now is to place those settings in a separate file, but as the app runs in the client, that file would have to be fetched via another HTTP request, meaning it's a publicly readable file. Even though there isn't any sensitive information in such a configuration file, I still feel a little wonky about having it public like that, if it can be avoided in any way...
Can it be avoided?
I don't think you can avoid this. One way or another your config file will be loaded into the Vue.js application, and will therefore be visible to the end user (with some effort).
Even putting the file outside of the public folder wouldn't help you much, because then it can't be requested over HTTP at all; it would only be available to your compile process.
So a possible solution could be to have an endpoint, say GET example.com/settings, that returns a JSON object. Then you could have your app set a cookie like config_key = H47DXHJK12 (or better, a UUID: https://en.wikipedia.org/wiki/Universally_unique_identifier) which would be the access key for a specific config object.
Your application then requests GET example.com/settings (which sends the config_key cookie along), and the config object for that key is returned. If the user clears their cookies, a new request will return an empty config.
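The question doesn't say what the backend is, so purely as an illustration, that settings endpoint could look something like this in ASP.NET Core (route, cookie name and config values are all assumptions; the same idea works in any server framework):

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("settings")]
public class SettingsController : ControllerBase
{
    // In a real app this would come from a file or database the colleagues can edit.
    private static readonly Dictionary<string, object> ConfigsByKey =
        new Dictionary<string, object>
        {
            ["H47DXHJK12"] = new { theme = "dark", pageSize = 50 }
        };

    [HttpGet]
    public IActionResult Get()
    {
        var key = Request.Cookies["config_key"];
        if (key != null && ConfigsByKey.TryGetValue(key, out var config))
            return Ok(config);

        // Unknown or cleared cookie: return an empty config, as described above.
        return Ok(new { });
    }
}
```

Note that this doesn't make the settings secret; whoever holds the key can still read them, which is the answer's original point that client-visible config can't be fully hidden.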

How to update file upload messages using Backbone?

I am uploading multiple files using javascript.
After I upload the files, I need to run several processing functions.
Because of the processing time that is required, I need a UI on the front telling the user the estimated time left of the entire process.
Basically I have 3 functions:
/upload - this is an endpoint for uploading the files
/generate/metadata - this is the next endpoint that should be triggered after /upload
/process - this is the last endpoint. Should be triggered after /generate/metadata
This is basically how I expect the screen to look: information such as percentage remaining and time left should be displayed.
However, I am unsure whether to have the server supply that information or to do a hackish estimate solely in JavaScript.
I would also need to update the screen with messages telling the user things such as:
"Currently uploading" if I am at function 1.
"Generating metadata" if I am at function 2.
"Processing..." if I am at function 3.
Function 2 only occurs after the successful completion of 1.
Function 3 only occurs after the successful completion of 2.
I am already using q.js promises to handle some parts of this, but the code has gotten scarily messy.
I recently came across Backbone, and it allows structured ways to handle single-page app behavior, which is what I want.
I have no problems with the server-side returning back json responses for success or failure of the endpoints.
I was wondering what would be a good way to implement this flow using Backbone.js.
You can use a "progress" file or DB entry which stores the state of the backend process. Have your backend process periodically update this file. For example, write this to the file:
{"status": "Generating metadata", "time": "3 mins left"}
After the user submits the files, have the frontend start pinging a backend progress function using a simple AJAX call and setTimeout. The progress function will simply open this file, grab the JSON-formatted status info, and return it so the frontend can update the progress bar.
You'll probably want the AJAX call to be attached to your model(s). Have your frontend view watch for changes to the status and update accordingly (e.g. a progress bar).
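As an illustration of the backend "progress function" described above (the question doesn't specify a server stack, so this sketch happens to use ASP.NET Core; the route, uploadId parameter and file location are all assumptions), it really just returns whatever JSON the long-running process last wrote to the progress file:

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("progress")]
public class ProgressController : ControllerBase
{
    [HttpGet("{uploadId}")]
    public async Task<IActionResult> Get(string uploadId)
    {
        // The long-running process writes e.g. {"status": "...", "time": "..."} here.
        var path = Path.Combine("progress", uploadId + ".json");

        if (!System.IO.File.Exists(path))
            return Ok(new { status = "Waiting to start", time = "unknown" });

        var json = await System.IO.File.ReadAllTextAsync(path);
        return Content(json, "application/json");
    }
}
```

The Backbone model can then fetch this URL on a setTimeout loop and fire its normal change events so the view updates the progress bar and status text.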
Long Polling request:
Polling request for updating Backbone Models/Views
Basically, when you upload a file you assign a "FileModel" to each given file. The FileModel will start a long-polling request every N seconds, until it gets the status "complete".

WCF Paged Results & Data Export

I've walked into a project that is using a WCF service for the data tier. Currently, when data is needed for a grid, all rows are returned, the results are bound to the grid, and the dataset is stuffed into a session variable for paging/sorting/rebinding. We've already hit a max message size problem, so I'm thinking it's time to convert from fetch-and-cache to fetching only the current page.
At face value this seems easy enough, but there's a small catch. The user is allowed to export the entire result set at any point. This means that for grid-viewing purposes fetching the current page is fine, but when they want to do an export, I still need to make a call for all the data.
This puts me back into the max message size issue. What is the recommended approach for this type of setup?
We are currently using the wsHttpBinding...
Thanks for any assistance.
I think the recommended approach for large files is to use WCF streaming. I'm not sure of the exact details for your scenario, but you could take a look at this as a starting point:
http://msdn.microsoft.com/en-us/library/ms789010.aspx
I would probably do something like this in your case:
create a service with a "paged" GetData() method, where you specify the page index and the page size as additional parameters. This gives you a nice clean interface for "regular" use, and it should not hit the maxMessageSize limits
create a second service (or method) that sends all the data - ideally, you could bundle it up into a ZIP file or something on the server before sending it. If that ZIP file is still too large, you might want to check out WCF streaming for handling large files, as Andy already pointed out
The maxMessageSize limit is in place for a good reason: to avoid denial-of-service attacks where a WCF service would just get flooded with large messages and brought to its knees. If you can, always keep that in mind and don't just jack up the maxMessageSize to 2 GB - it might come back to bite you :-)
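To make the suggested split concrete, the service contract could be shaped roughly like this. It is a sketch only: the service and row type names are made up, and only the paged/streamed split comes from the answer above.

```csharp
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class OrderRow
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Customer { get; set; }
}

[ServiceContract]
public interface IReportService
{
    // Small, paged messages for grid binding; stays well under the message size limit.
    [OperationContract]
    List<OrderRow> GetPage(int pageIndex, int pageSize);

    // Lets the grid show total page count without fetching everything.
    [OperationContract]
    int GetTotalCount();

    // Full export returned as a stream (e.g. a zipped CSV built server-side).
    // The binding for this operation needs transferMode="Streamed"; wsHttpBinding
    // (mentioned above) does not support streamed transfer, so the export would
    // typically sit on a separate basicHttpBinding or netTcpBinding endpoint.
    [OperationContract]
    Stream ExportAll();
}
```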