Change Key of HttpContext.Request.Query item in ASP.NET Core

I am trying to work around an issue with a 3rd-party filter. My current plan is to put a filter in front of that filter to "fix" the query string so it does not error out.
I made an ActionFilterAttribute and added it to the filter list. It is running fine. I am adding my logic in the OnActionExecuting method.
The first item of context.HttpContext.Request.Query has a Key that is a JSON structure. I need to change that Key to be {}.
Problem is that both context.HttpContext.Request.Query and context.HttpContext.Request.QueryString are read-only.
How can I alter the context.HttpContext.Request.Query or the context.HttpContext.Request.QueryString?
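A minimal sketch of the kind of filter described above (the attribute name is invented, and it assumes the parsed query can be swapped out by replacing the request's IQueryFeature, which is not stated in the original question):

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.Features;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Primitives;

public class FixBreezeQueryAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        var request = context.HttpContext.Request;
        var firstKey = request.Query.Keys.FirstOrDefault();

        // Only rewrite when the first key looks like a JSON object.
        if (firstKey == null || !firstKey.TrimStart().StartsWith("{"))
            return;

        // Copy the existing values, swapping the JSON key for "{}".
        var items = new Dictionary<string, StringValues>(StringComparer.OrdinalIgnoreCase);
        foreach (var pair in request.Query)
            items[pair.Key == firstKey ? "{}" : pair.Key] = pair.Value;

        // Request.Query is served by IQueryFeature, so replacing the feature
        // replaces the query collection seen by downstream filters.
        context.HttpContext.Features.Set<IQueryFeature>(
            new QueryFeature(new QueryCollection(items)));
    }
}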
EDIT - The Underlying Problem:
BreezeJS did a minimal upgrade to support .NET Core. In this upgrade, part of the code expects every call that has any parameters to return an IQueryable (QueryFns.cs Line 32). From reading the code this seems like an error: the calling function (the actual filter) appears to expect null to be returned rather than an exception being thrown.
Either way, this makes moving to .NET Core very hard.
I considered my other options and if this fails, I will continue to pursue them:
Submit a pull request to fix the issue: The project has not accepted any pull requests in over a year and a half. So it seems unlikely my request will be taken.
Fork my own branch: I would rather not have to create and maintain a separate version with my own build and publishing pipeline.
Find a way to make the Breeze filter ignore the call when the result is not an IQueryable: I am currently looking into this one. (This question.)
Find a way to send my call from the client differently so that Breeze ignores calls that do not return IQueryable: The return type of the call is owned by the service, and this is an issue with the service. I would rather not have tight coupling between the service and the client such that the client is crafting workarounds for service-side filter issues.

Related

How to invoke a custom ResultFilter before ClientErrorResultFilter is executed in ASP.NET 6

I spent almost a full day debugging why my client can't post any forms, until I found out the anti-forgery mechanism got borked on the client side. The server just responded with a 400 error, with zero logs or information (it turns out anti-forgery validation failures are logged internally at the Info level).
So I decided the server needs to handle this scenario specially; however, according to this answer, I don't really know how to do that (aside from hacking).
Normally I would set up a IAlwaysRunResultFilter and check for IAntiforgeryValidationFailedResult. Easy.
Except that I use API controllers, so by default all results get transformed into ProblemDetails. So context.Result as mentioned here is always of type ObjectResult. The solution accepted there is to use options.SuppressMapClientErrors = true;, however I want to retain this mapping at the end of the pipeline. But if this option isn't set to true, I have no idea how to intercept the Result in the pipeline before this transformation.
So in my case, I want to do something with the result of the anti-forgery validation as mentioned in the linked post, but after that I want to retain the ProblemDetails transformation. But my question is titled generally, as it is about executing filters before the aforementioned client mapping filter.
Through hacking I am able to achieve what I want. If we take a look at the source code, we can see that the filter I want to precede has an order of -2000. So if I register my global filter like this: o.Filters.Add(typeof(MyResultFilter), -2001);, then the filter shown here correctly executes before ClientErrorResultFilter, and thus I can handle the result and retain the transformation afterwards. However, I feel like this is just exploiting the open-source nature of .NET 6, and as you can see it is an internal constant, so I have no guarantee the next patch doesn't change it and break my code. Surely there must be a proper way to order my filter to run before the API transform.
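For illustration, a rough sketch of what such a filter might look like when registered with the -2001 order mentioned above (the filter name and the logging are invented; IAlwaysRunResultFilter and IAntiforgeryValidationFailedResult come from the post itself):

using Microsoft.AspNetCore.Antiforgery;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Logging;

public class AntiforgeryFailureFilter : IAlwaysRunResultFilter
{
    private readonly ILogger<AntiforgeryFailureFilter> _logger;

    public AntiforgeryFailureFilter(ILogger<AntiforgeryFailureFilter> logger)
        => _logger = logger;

    public void OnResultExecuting(ResultExecutingContext context)
    {
        // At order -2001 this runs before the built-in client error mapping,
        // so the antiforgery failure result has not been wrapped into
        // ProblemDetails yet and can still be inspected here.
        if (context.Result is IAntiforgeryValidationFailedResult)
        {
            _logger.LogWarning("Antiforgery validation failed for {Path}",
                context.HttpContext.Request.Path);
        }
    }

    public void OnResultExecuted(ResultExecutedContext context) { }
}

// Registration, as in the post:
// services.AddControllers(o => o.Filters.Add(typeof(AntiforgeryFailureFilter), -2001));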

How to define a REST API with two Parameters

I am currently working on a REST API for a project. As part of it I need to search for events, so I would like to make an endpoint for searching events in a period, that is, one that takes two parameters, from and to.
For the search you would normally use a GET operation. My question is whether it makes sense to specify two parameters in the path, or whether I should rather fall back to a POST operation for something like that.
Example for the path /Events{From}{To}
Is this even feasible with multiple parameters?
If you are not making a change to the resource, you should use a GET operation.
More detailed explanation:
If you were writing a plain old RPC API call, the two could technically be interchangeable as long as the server-side processing was no different between the two calls. However, in order for the call to be RESTful, calling the endpoint via the GET method should have a distinct functionality (which is to get resource(s)) from the POST method (which is to create new resources).
GET request with multiple parameters: /events?param1=value1&param2=value2
GET request with an array as parameter: /events?param=value1,value2,value3
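As a sketch of the first form (ASP.NET Core syntax; the route, parameter names and response are illustrative), a period search could look like this:

using System;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("events")]
public class EventsController : ControllerBase
{
    // GET /events?from=2024-01-01&to=2024-01-31
    [HttpGet]
    public IActionResult Search([FromQuery] DateTime from, [FromQuery] DateTime to)
    {
        if (to < from)
            return BadRequest("'to' must not be earlier than 'from'.");

        // Look up events in the period here; echoing the range keeps the sketch self-contained.
        return Ok(new { from, to });
    }
}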

Update build definition in team foundation 2015 API

I am busy configuring our new TFS 2015 server (on-premises) and trying to get the new vNext builds to work properly.
What I have now are some extra PowerShell scripts that increase the version number of my assemblies.
They also change the build number in TFS by calling the API method (see the TFS REST API). My JSON body only sends the new build number (e.g. {"buildNumber": "1.0.1.1234"}) and this works fine.
Now I have added some major, minor and patch version variables in the build definition for the version. Once the build is done these should be updated, so I thought I would do the same kind of thing and just send an update API call to the corresponding build definition endpoint. The documentation says the revision number is mandatory, so I have added that. Beyond that I only added the changed variables.
The API call works, but the nasty thing is that it updates the whole definition and clears out all the other settings that I did not provide in the JSON body. I also tried first getting the definition through the API, changing the JSON values for the variables and sending that back, but that did not work correctly either.
So does anybody know a good solution for this?
As a workaround, what I did for now is add a dummy build definition (e.g. "_ProjectVersion") that is totally empty except for the variables; my build task now uses that build definition to get the latest version numbers and update them. So the API call still empties that whole build definition, but since it only contains my variables I don't mind.
I am also doing this in PowerShell, since all scripting should be automated and done in PowerShell.
The problem I have is that the API call to get the JSON of an existing build definition returns invalid JSON once it has been handled in PowerShell.
For example, "#{multipliers=[]; will fail with 'must have at least one value', even though a JSON validator may report it as valid. The correct JSON is {"multipliers": "[]",
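For what it's worth, the get-modify-put round trip can also be done outside PowerShell, which avoids the JSON mangling. A rough C# sketch; the URL pattern, api-version=2.0 and the "PatchVersion" variable name are assumptions based on the TFS 2015 REST documentation, not taken from the post:

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static class BuildDefinitionUpdater
{
    public static async Task BumpPatchVersionAsync(
        HttpClient client, string collectionUrl, string project, int definitionId)
    {
        var uri = $"{collectionUrl}/{project}/_apis/build/definitions/{definitionId}?api-version=2.0";

        // 1. GET the complete definition so the later PUT does not wipe other settings.
        var definition = JObject.Parse(await client.GetStringAsync(uri));

        // 2. Change only the variables of interest ("PatchVersion" is illustrative).
        definition["variables"]["PatchVersion"]["value"] = "1235";

        // 3. PUT the whole document back; the mandatory revision number is already
        //    part of the fetched definition, so it round-trips unchanged.
        var response = await client.PutAsync(uri,
            new StringContent(definition.ToString(), Encoding.UTF8, "application/json"));
        response.EnsureSuccessStatusCode();
    }
}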

How do multiple versions of a REST API share the same data model?

There is a ton of documentation on academic theory and best practices for managing versioning of RESTful web services; however, I have not seen much discussion on how multiple REST APIs interact with data.
I'd like to see various architectural strategies or documentation on how to handle hosting multiple versions of your app that rely on the same data pool.
For instance, suppose you make a destructive, database-level change to a table that forces you to increment your major API version to v2.
Now at any given time, users could be interacting with the v1 web service and the v2 web service at the same time and creating data that is visible and editable by both services. How should this be handled?
Most changes introduced to an API affect the content of the response; as long as the changes are incremental, this is not a very big problem (note: you should never expose the exact DB model directly to the clients).
When you make a destructive/significant change to the DB model and a new version of the API is introduced, there are two options:
Turn the previous version off and filter out all queries to it, replying with a 301 and the new location (see the sketch below).
If option 1 is impossible, you need to maintain both the previous and the current version of the API. Since this can be time- and money-consuming, it should be done only for a limited period, and eventually the previous version should be turned off.
What about the DB model? When two versions of the API are active at the same time, I'd try to keep the DB model as consistent as possible, keeping in mind that running two versions at the same time is only temporary. But as I wrote earlier, the DB model should never be exposed directly to the clients - this may help you avoid a lot of problems.
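A sketch of option 1 in ASP.NET Core middleware form (the /api/v1 and /api/v2 prefixes are illustrative):

// In Startup.Configure / Program.cs:
app.Use(async (context, next) =>
{
    // Answer retired v1 calls with a permanent redirect to the v2 location.
    if (context.Request.Path.StartsWithSegments("/api/v1", out var remainder))
    {
        context.Response.StatusCode = StatusCodes.Status301MovedPermanently;
        context.Response.Headers["Location"] = "/api/v2" + remainder;
        return;
    }
    await next();
});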
I have given this a little thought...
One solution may be this:
Just because the v1 API should not change doesn't mean the underlying implementation cannot change. You can modify the v1 implementation code to set a default value, omit saving a field, return an unchecked exception, or apply some computational logic that keeps the v1 API compatible with the shared data source. Then implement a better, cleaner, more idealistic implementation in v2.
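A minimal sketch of that idea (all type and property names here are invented for illustration):

public class EventRecord               // current storage model, shaped for v2
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string[] Tags { get; set; } // replaced the old single "Category" column
}

public class LegacyEventDto            // what the v1 API still promises to return
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Category { get; set; }
}

public static class V1Mapping
{
    // The v1 endpoint keeps its old contract by deriving the removed column
    // from the new structure, or falling back to a default value.
    public static LegacyEventDto ToV1(EventRecord record) => new LegacyEventDto
    {
        Id = record.Id,
        Title = record.Title,
        Category = record.Tags != null && record.Tags.Length > 0 ? record.Tags[0] : "uncategorized"
    };
}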
When you are going to change anything in your API structure that can change the response, you must increase your API version.
For example, say you have this request and response:
request post: a, b, c, d
res: {a,b,c+d}
and you are going to add 'e', fetched from the database, to your response.
If no current client version depends on 'e', you can add it in your current API version.
But if your new changes alter the existing responses, for example:
res: {a+e, b, c+d}
you must increase the API version number to prevent clients from crashing.
Changes to the request inputs work the same way.

Unloading a data reference in Entity Framework

Currently I am struggling with an Entity Framework issue. I have a WCF service that sits on top of Entity Framework and allows queries against it. At some point the user is able to request a file from the service. The files are referenced by solution entries, so when you request a file from a solution, the reference is loaded to gain access to the file store.
That all works fine, but from that point onward, whenever you do another query that returns that solution entry, the whole file gets attached to the return result. I need some way of detaching or unloading the reference, such that the result entries will only contain an unloaded reference to the file store again.
I have tried to create a new context and query that context to retrieve information from, but when I do that, the entity in the original context is also changed.
I have tried to detach the entity from the original context and then query from the new context. That does not work either.
I have found one way of doing this. For all the non file-download queries, I detach the result entity, and send that over the wire. I am not sure if that is the best way to go about it though.
I hope someone might be able to provide some insight, thanks for the effort.
The issue you are experiencing is probably due to Change Tracking, which is on by default.
Possible Solution:
Disable Change Tracking with MergeOption.NoTracking
using (MyEntities _context = new MyEntities())
{
    // NoTracking stops the context from tracking the returned entities,
    // so previously loaded references are not attached to later results.
    _context.Widgets.MergeOption = MergeOption.NoTracking;
    return _context.Widgets.ToList();
}
This article may help to point you in the right direction on how to handle this issue if the solution above does not work.
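If the service uses the DbContext API rather than ObjectContext, the equivalent (to the best of my knowledge) is AsNoTracking(); MyEntities and Widgets are reused from the sketch above:

using System.Collections.Generic;
using System.Data.Entity;   // EF 6; EF Core exposes the same extension method
using System.Linq;

public class WidgetRepository
{
    // Returns widgets without attaching them to the context, so the previously
    // loaded file data is not dragged along on later queries.
    public List<Widget> GetWidgets()
    {
        using (var context = new MyEntities())
        {
            return context.Widgets.AsNoTracking().ToList();
        }
    }
}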
I struggled with a similar issue recently. The problem was the result of the context maintaining a reference to the object I was using (obviously). Every time I made a change to an object of the same type, even when obtained with a new context (so I thought), the object was being changed.
With help from one of my peers, we determined the context was hanging around due to the way I was registering it with my IoC container (lifestyle per web request). When I changed the lifestyle to transient (which always provides a new instance), changes to objects of the same type were no longer affected.
Hope this helps.
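For reference, with Castle Windsor (which the "lifestyle per web request" wording suggests, though that is an assumption, and MyEntities stands in for whatever context type is registered) the change looks roughly like this:

using Castle.MicroKernel.Registration;
using Castle.Windsor;

var container = new WindsorContainer();

// Before: one context instance shared for the whole web request.
// container.Register(Component.For<MyEntities>().LifestylePerWebRequest());

// After: every resolution gets its own context instance.
container.Register(Component.For<MyEntities>().LifestyleTransient());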