Changing the ASPNETCORE_ENVIRONMENT value from code

I want to change the ASPNETCORE_ENVIRONMENT value from code while initializing the application. This is done with
Environment.SetEnvironmentVariable("ASPNETCORE_ENVIRONMENT", [newValue]);
and happens very early during application startup, prior to any calls to Use[XY].
If I check the value with Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") throughout the lifetime of the application, it returns the desired value.
However, the Environment tag helper always uses the initial value that was set at the application's very startup (the old value).
While googling, I found a lot of posts on how to change the "ASPNETCORE_ENVIRONMENT" value from outside of the application, but none on how to do it from within the application.
The background of this question is that the information about which stage the application runs in is stored in our database, and we want to use this information.
Update
It seems that I have found the solution; I will post it if it proves reliable.

At startup, ASP.NET Core creates an instance implementing IWebHostEnvironment and registers it as a singleton. This interface has a read/write property, EnvironmentName, which is then used as the "environment" value throughout the application's lifetime.
Hence the solution was (GetRequiredService is resolved on the built service provider):
var webHostEnvironment = [serviceProvider].GetRequiredService<IWebHostEnvironment>();
webHostEnvironment.EnvironmentName = [my value];
What surely matters is that the assignment happens as early as possible, so that by the time the application's Configure method runs, the value is already set to its target.
I tested several scenarios and so far I haven't found any problems.
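For reference, here is a minimal sketch of how this can be wired up with the generic host (pre-.NET 6 hosting model, matching the Startup/Configure style above). ReadStageFromDatabase is a hypothetical stand-in for however the stage is loaded from the database:

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        var host = Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>())
            .Build();

        // The IWebHostEnvironment singleton already exists once the host is built,
        // but Startup.Configure has not run yet, so this is early enough.
        var env = host.Services.GetRequiredService<IWebHostEnvironment>();
        env.EnvironmentName = ReadStageFromDatabase();

        host.Run();
    }

    // Hypothetical helper: stands in for the actual database lookup.
    private static string ReadStageFromDatabase() => "Staging";
}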


Is it a good practice to attach an event-related parameter to an object's model as a variable?

This is about an API handling validation while saving an object. The front-end client sends a request to a specific API endpoint, and on the back-end the API creates a new object if the right conditions are met.
Right now our regular method is that each model has a ruleset for its fields, and the validation is invoked when the save function is called; technically, the validation runs right before the object is saved to the database.
Then, during today's code review, I came across a solution I wasn't sure was good practice or not: the front-end must send a specific parameter to the API every time. This is because other APIs use our API as well, and we need to know whether the request was sent as an API request or a browser request. If this parameter is present, we want to execute an extra validation function on a specific field.
(1) If I had to implement it, I would check the incoming parameter at the service-handler or controller level, invoke the validation right away if the parameter is present, and throw an error if it fails.
(2) The implementation I saw, however, adds an extra variable to the model, sets that variable when the parameter arrives, and validates only when the save function is invoked on the object (which first validates the ruleset defined on the object's fields, then saves the object to the database).
My problem with (2) is that the object has now grown by an extra variable that is only related to a specific event, so I would say it's better to implement (1). But (2) also has an advantage: when the object is created on a different endpoint by parsing the parameters, the validation will work there as well, even if the developer forgets to update the code there.
Now, this may seem like a silly question, because why would I care about just one extra variable? But this is the bedrock of something good or bad. If I say this is OK, then from now on the models will keep growing with extra variables that are only related to specific events, which I think should be handled at the controller/service-handler level. On the other hand, the code would be more reliable if it were not the developer who has to remember all 6712537 functionalities and keep them in mind when changing something somewhere. Say all the devs get a heart attack tomorrow from the excitement of an amazing discovery, and a new developer who doesn't know about these small details has to change code related to this functionality; the new feature should then still be supported by the old one.
So my question is: is there any good practice for this, and what do you think would be the best approach?
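For concreteness, approach (1) might look roughly like this (a sketch only; the question doesn't name a stack, so C# and all the names here are illustrative):

using Microsoft.AspNetCore.Mvc;

public class OrderDto
{
    public string Email { get; set; } = "";
    public void Save() { /* ruleset validation, then persist to the database */ }
}

[ApiController]
public class OrdersController : ControllerBase
{
    // Approach (1): the event-related parameter never touches the model;
    // it is checked right here at the controller level.
    // "fromApi" and ExtraFieldValidationPasses are hypothetical names.
    [HttpPost("orders")]
    public IActionResult Create(OrderDto dto, [FromQuery] bool fromApi = false)
    {
        if (fromApi && !ExtraFieldValidationPasses(dto))
            return BadRequest("Extra validation failed.");

        dto.Save(); // the regular ruleset still runs at save time
        return Ok();
    }

    private static bool ExtraFieldValidationPasses(OrderDto dto) =>
        dto.Email.Contains("@"); // placeholder for the real field check
}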
So I spent some time thinking about the solution, and I think the best approach is to have an array of acceptable trigger variables in the model class. When the parameters are passed to the model at the controller level, the loader function can be modified to take the trigger variables from the parameters and save them in the model's associative-array variable that stores the trigger variables.
By default this array is empty, and no matter how many new variables need to be created, it will only contain the necessary ones when they are actually used.
Of course, the loader function needs to be modified so that it can filter out the non-trigger variables, just as it already does for the regular fields, and there can even be a validation ruleset on the trigger variables if necessary.
This solves both the problem of overgrowing the object with unnecessary variables and the centralized-validation concern, because the validation can now always be done in the model instead of the controller.
And since the loader function is modified to store the trigger variables in the model's trigger-variables array, the developer never has to remember that this functionality was created. That is good, because in the future, when he creates a new related function or endpoint that handles object creation, he will not forget to validate it against the old functionality; the loader function he modified in the past will handle it for him.
It should be noted, though, that since the loader function doesn't differentiate between the parameters and where to load them, other than checking the parameter names with the filter functions, these parameter names must be distinct from each other; otherwise buggy functionality can be created accidentally. For example, if you forget that a model attribute with the same name is already in use, you can accidentally trigger an event that was programmed to fire when the trigger variable of that name is present. This can be solved by prefixing the trigger variables, for example.
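A rough sketch of the described loader/trigger design (again illustrative C#, with the prefixing idea from the last paragraph shown as a "trigger_" prefix):

using System;
using System.Collections.Generic;

public class OrderModel
{
    // Whitelist of trigger names the loader will accept; prefixed so
    // they can never collide with a regular field name.
    private static readonly HashSet<string> AcceptedTriggers =
        new HashSet<string> { "trigger_strictEmailCheck" };

    // Empty by default; only triggers actually sent with the request end up here.
    public Dictionary<string, string> Triggers { get; } =
        new Dictionary<string, string>();

    public string Email { get; set; } = "";

    // Loader: splits incoming parameters into regular fields and trigger
    // variables, so no controller has to special-case them.
    public void Load(IDictionary<string, string> parameters)
    {
        foreach (var pair in parameters)
        {
            if (AcceptedTriggers.Contains(pair.Key))
                Triggers[pair.Key] = pair.Value;   // trigger variable
            else if (pair.Key == "email")
                Email = pair.Value;                // regular field
        }
    }

    // Called from the save function: the regular ruleset runs first,
    // then any trigger-dependent validation.
    public void Validate()
    {
        // ... regular field ruleset here ...
        if (Triggers.ContainsKey("trigger_strictEmailCheck") && !Email.Contains("@"))
            throw new InvalidOperationException("Email failed the strict check.");
    }
}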

Change Key of HttpContext.Request.Query item in ASP.NET Core

I am trying to work around an issue with a 3rd-party filter. My current plan is to put a filter in front of that filter to "fix" the query string so it does not error out.
I made an ActionFilterAttribute and added it to the filter list; it is running fine. I am adding my logic in the OnActionExecuting method.
The first item of context.HttpContext.Request.Query has a Key that is a JSON structure. I need to change that Key to be {}.
The problem is that both context.HttpContext.Request.Query and context.HttpContext.Request.QueryString are read-only.
How can I alter the context.HttpContext.Request.Query or the context.HttpContext.Request.QueryString?
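For what it's worth, one plausible way to do this (a sketch, assuming the HttpRequest.Query setter that exists in recent ASP.NET Core versions; the key-rewriting rule here is illustrative):

using System.Collections.Generic;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Primitives;

public class FixQueryStringAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        var request = context.HttpContext.Request;

        // Rebuild the query collection, rewriting the problematic key.
        var rewritten = new Dictionary<string, StringValues>(request.Query.Count);
        foreach (var pair in request.Query)
        {
            // Illustrative rule: collapse a JSON-structured key down to "{}".
            var key = pair.Key.StartsWith("{") ? "{}" : pair.Key;
            rewritten[key] = pair.Value;
        }

        // HttpRequest.Query has a setter, so the rewritten collection
        // can be assigned back in place.
        request.Query = new QueryCollection(rewritten);
    }
}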
EDIT - The Underlying Problem:
BreezeJS did a minimal-level upgrade to support .NET Core. In this upgrade, part of the code expects every call that has any parameters to return an IQueryable (QueryFns.cs, line 32). From reading the code, this seems to be an error (the calling function, the actual filter, seems to expect null to be returned rather than an exception thrown).
Either way, this makes moving to .NET Core very hard.
I considered my other options and if this fails, I will continue to pursue them:
Submit a pull request to fix the issue: the project has not accepted any pull requests in over a year and a half, so it seems unlikely mine would be taken.
Fork my own branch: I would rather not create and maintain a separate version with my own build and publishing pipeline.
Find a way to make the Breeze filter ignore the call when the result is not an IQueryable: I am currently looking into this one. (This question.)
Find a way to send my call from the client differently so that Breeze ignores calls that do not return IQueryable: the return type of the call is owned by the service, and this is an issue with the service. I would rather not have tight coupling between the service and the client, with the client crafting workarounds for service-filter issues.

UseLegacyPathHandling is not loaded properly from app.config runtime element

I'm trying to use the new long-path support in my app. To use it without forcing clients to have the newest .NET 4.6.2 installed on their machines, one only needs to add these elements to the app.config (see this link for more info: https://blogs.msdn.microsoft.com/dotnet/2016/08/02/announcing-net-framework-4-6-2/):
<startup>
  <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.1"/>
</startup>
<runtime>
  <AppContextSwitchOverrides value="Switch.System.IO.UseLegacyPathHandling=false" />
</runtime>
When I use it in my executable project, it works perfectly. The problem is in my test projects (which use NUnit). I've added an app.config to my test project the same way I added it to the executable project.
Using the ConfigurationManager class, I've verified that the app.config is indeed loaded (in short: via an app setting which I was able to retrieve in a unit test).
Using ConfigurationManager.GetSection("runtime"), I even verified that the runtime element has been loaded properly (the _rawXml value is the same as in app.config).
But (!) for some reason the app.config runtime element does not influence the UseLegacyPathHandling switch, and therefore all of my calls with long paths fail.
I guess the problem somehow relates to the fact that test projects are built as DLLs that are loaded by the NUnit engine, which is the execution entry point.
I'm facing exactly the same problem in another project of mine, a DLL loaded by the Office Word application. I believe the problem is the same in both cases and derives from the fact that these projects are not meant to be execution entry points.
It's important to understand that I have no access to the executables themselves (Word or NUnit), and therefore I can't configure them myself.
Is there an option to somehow make the AppContextSwitchOverrides be reloaded from scratch dynamically? Other ideas will be most welcome.
I've been having the same issue, and have noted the same lack of loading of that particular setting.
So far, what I've found is that the caching of settings is at least partly to blame.
If you check how it's implemented, disabling the cache has no effect on future reads of a value (i.e. if caching is enabled and a switch is accessed during that time, it will stay cached).
https://referencesource.microsoft.com/#mscorlib/system/AppContext/AppContext.cs
This doesn't seem to be an issue for most of the settings, but for some reason the UseLegacyPathHandling and BlockLongPaths settings are getting cached by the time I can first step into the code.
At this time I don't have a good answer, but if you need something in the interim, here is a highly suspect temporary fix. Using reflection, you can correct the setting in the assembly init. It writes to private fields by name and uses the specific value 0 to invalidate the cache, so it is a very delicate fix and not appropriate as a long-term solution.
That being said, if you need something that 'just works' for the time being, you can check the settings, and apply the hack as needed.
Here's a simple code example; this would be a method in your test class. (Note that it uses MSTest's [AssemblyInitialize]; under NUnit, a [OneTimeSetUp] method in a [SetUpFixture] would be the rough equivalent.)
[AssemblyInitialize]
public static void AssemblyInit(TestContext context)
{
    // Check to see if we're using legacy paths.
    bool stillUsingLegacyPaths;
    if (AppContext.TryGetSwitch("Switch.System.IO.UseLegacyPathHandling", out stillUsingLegacyPaths) && stillUsingLegacyPaths)
    {
        // Here's where we trash the private cached field to get this to ACTUALLY work.
        var switchType = Type.GetType("System.AppContextSwitches"); // <- internal class, bad idea.
        if (switchType != null)
        {
            AppContext.SetSwitch("Switch.System.IO.UseLegacyPathHandling", false); // <- should disable legacy path handling

            // Get the private field that caches the path-handling value (bad idea).
            var legacyField = switchType.GetField("_useLegacyPathHandling",
                System.Reflection.BindingFlags.Static | System.Reflection.BindingFlags.NonPublic);
            legacyField?.SetValue(null, (Int32)0); // <- the cache uses 0 for "no value", -1 for false, 1 for true.

            // Re-read the switch: with the cache invalidated, this picks up the new value.
            AppContext.TryGetSwitch("Switch.System.IO.UseLegacyPathHandling", out stillUsingLegacyPaths);
            Assert.IsFalse(stillUsingLegacyPaths, "Testing will fail if we are using legacy path handling.");
        }
    }
}

Changing Mule Flow Threading Profile at runtime

I have a requirement where I need to change the Mule flow threading behavior at runtime, without bouncing the whole Mule container. I figured out a few different ways to achieve this, but none of them are working.
I tried accessing the Mule context registry to look up the FlowConstructLifecycleManager object, so that I could tap in, access its threading profile, reset those values, and then stop and start the flow programmatically for the change to take effect. I am stuck in this approach, as I was unable to get hold of the FlowConstructLifecycleManager object from either the Mule Spring registry or the transient registry. I was able to get hold of the Flow object, which has a direct reference to the FlowConstructLifecycleManager, but unfortunately that member is protected and no method is exposed to access it.
Since I was unable to access the FlowConstructLifecycleManager directly from Mule's Flow class, I decided to extend the Flow class and add a public method to it, so that I could access the FlowConstructLifecycleManager object from the Flow object programmatically. But I am stuck in this approach as well: even though I packaged my version of the Flow class and dropped it in the container's lib/user folder, my version is still not picked up, and the original is loaded instead.
It would be of great help if I could get any pointers on solving either my first or second problem.
Thanks in advance,
Ananya
In our company we are building a dashboard from which we should be able to start/stop any flow, or change the processing power of any flow by increasing/decreasing its active threads or changing its poller frequency. All of this should be done at runtime, without any server downtime.
Anyway, I finally made it work. I had to patch the mule-core jar and expose a few objects so that I could get to the threading-profile object, tweak the values at runtime, and stop/start the flow for the changes to take effect. I know this is a little bit messy, but it works.
Thanks,
Ananya

Internal fault in MS auto-generated method of WCF

I have a problem with WCF. My testing code is pretty simple.
I call a service-layer method on my server from my Silverlight application and print the result in a textbox.
All of this is surrounded by try-catch.
When my service-layer method simply returns a constant string, there seems to be no problem; however, as soon as it calls a more complex method, it fails.
While debugging, it does not even reach the complex model method; it fails before that, inside some auto-generated code from Microsoft.
As the error message "NotFound" is not exactly the most helpful or specific, you can imagine my trouble googling for hints.
I thought maybe the auto-generated code could only send simple data, so I made a temporary string and returned that, but this did not help.
I already have a client access policy and an added service reference, have removed the duplicate reference in ServiceReferences.ClientConfig, and have a ServiceLayer.svc.cs.
I am debugging by running from the main window and my breakpoints are picked up.
Anyone?
I had some errors in the server-side method that were quickly found once debugging was fixed.
I fixed the debugging, as I said in the comments, by setting the solution to "Multiple Startup Projects".
Whenever I had trouble updating the WCF service methods, one of these usually solved it all:
1 Delete all bin and obj folders (specifically selecting Rebuild might do the same).
2 The service layer will not successfully auto-update (but will work!) unless this:
[ServiceContract(Namespace = "")]
... is set to this:
[ServiceContract(Namespace = "YourServiceLayerName")]
3 Right clicking on the servicereference and selecting "update...".
Sometimes it would stop debugging again, but a forced full rebuild would return it to normal.
I hope this helps someone.