How can we pass extra parameters when we call IFeatureManager.IsEnabled("featurename"), which triggers IFeatureFilter.Evaluate? - azure-app-configuration

I am learning Azure App Configuration - Feature Manager.
I am trying to understand what to do when a custom feature filter that we write needs some parameters which are not part of HttpContext.
How can we pass extra parameters when we call IFeatureManager.IsEnabled("featurename"), which triggers the Evaluate method of the custom filter?
And how would an Azure Function app or WebJob use it?
[FilterAlias("AllowedUsers")]
public class AllowedUsersFeatureFilter : IFeatureFilter
{
private readonly IHttpContextAccessor _httpContextAccessor;
public AllowedUsersFeatureFilter(IHttpContextAccessor httpContextAccessor)
{
_httpContextAccessor = httpContextAccessor;
}
// HOW CAN WE PASS SOME parameter when we call IFeatureManager.IsEnabled("featurename")
public bool Evaluate(FeatureFilterEvaluationContext context)
{
var featureFilterParams = context.Parameters.Get<AllowedUsersFilterSettings>();
if (featureFilterParams == null)
return false;
var userEmail = _httpContextAccessor.HttpContext.User?.FindFirst(ClaimTypes.Upn)?.Value;
var alias = userEmail?.Split('#').First();
return featureFilterParams.Aliases.Split(',').Contains(alias, StringComparer.OrdinalIgnoreCase);
}
}

There is an issue open for this on the FeatureManagement repository: https://github.com/microsoft/FeatureManagement-Dotnet/issues/2. With the initial preview this is not possible. It should be in by the next release.
Currently, using AsyncLocal to flow an execution context is a possible solution; however, it is a work-around until the actual capability to pass in the context is available.
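A rough sketch of that work-around, reusing the preview-era signatures from the question (the FeatureCallContext holder type and the calling snippet at the end are illustrative assumptions, not part of the library):
public static class FeatureCallContext
{
    // AsyncLocal<T> flows with the async execution context, so a value set just
    // before IsEnabled(...) is visible inside the filter's Evaluate(...).
    public static readonly AsyncLocal<string> CurrentUserEmail = new AsyncLocal<string>();
}
[FilterAlias("AllowedUsers")]
public class AllowedUsersFeatureFilter : IFeatureFilter
{
    public bool Evaluate(FeatureFilterEvaluationContext context)
    {
        var settings = context.Parameters.Get<AllowedUsersFilterSettings>();
        var alias = FeatureCallContext.CurrentUserEmail.Value?.Split('#').First();
        return settings != null
            && alias != null
            && settings.Aliases.Split(',').Contains(alias, StringComparer.OrdinalIgnoreCase);
    }
}
// The caller (e.g. a WebJob or an Azure Function) sets the value before the check:
FeatureCallContext.CurrentUserEmail.Value = "someone@contoso.com";
bool isEnabled = featureManager.IsEnabled("featurename");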

Related

Why documentt.data.getValue() gives empty string? [duplicate]

I have a custom object that takes a parameter of (DocumentSnapshot documentSnapshot). There is also an inner object from Firebase that retrieves a snapshot and sets the values on my custom model, and it also has a (DocumentSnapshot documentSnapshot) argument. However, I wish to get the data from Firebase and pass it to my custom object, because mine takes multiple pieces of data, not only Firebase's. And it's not possible to iterate Firestore without an override.
Here's the code:
public UserSettings getUserSettings(DocumentSnapshot documentSnapshot){
Log.d(TAG, "getUserSettings: retrieving user account settings from firestore");
DocumentReference mSettings = mFirebaseFirestore.collection("user_account_settings").document(userID);
mSettings.get().addOnSuccessListener(new OnSuccessListener<DocumentSnapshot>() {
@Override
public void onSuccess(DocumentSnapshot documentSnapshot) {
UserAccountSettings settings = documentSnapshot.toObject(UserAccountSettings.class);
settings.setDisplay_name(documentSnapshot.getString("display_name"));
settings.setUsername(documentSnapshot.getString("username"));
settings.setWebsite(documentSnapshot.getString("website"));
settings.setProfile_photo(documentSnapshot.getString("profile_photo"));
settings.setPosts(documentSnapshot.getLong("posts"));
settings.setFollowers(documentSnapshot.getLong("followers"));
settings.setFollowing(documentSnapshot.getLong("following"));
}
});
}
You cannot return something now that hasn't been loaded yet. Firestore loads data asynchronously, and it may take some time. Depending on your connection speed and the state of the connection, it may take from a few hundred milliseconds to a few seconds before that data is available. If you want to pass the settings object to another method, just call that method inside the onSuccess() method and pass that object as an argument. So a quick fix would be this:
@Override
public void onSuccess(DocumentSnapshot documentSnapshot) {
UserAccountSettings settings = documentSnapshot.toObject(UserAccountSettings.class);
yourMethod(settings);
}
One more thing to mention is that you don't need to set those values on an object that already has them; you are already getting the data from the database as an object.
So remember, the onSuccess() method has asynchronous behaviour, which means the code that follows your query is executed before you have received the data from your database. If you want to use the settings object outside that method, you need to create your own callback. To achieve this, first you need to create an interface like this:
public interface MyCallback {
void onCallback(UserAccountSettings settings);
}
Then you need to create a method that is actually getting the data from the database. This method should look like this:
public void readData(MyCallback myCallback) {
DocumentReference mSettings = mFirebaseFirestore.collection("user_account_settings").document(userID);
mSettings.get().addOnSuccessListener(new OnSuccessListener<DocumentSnapshot>() {
@Override
public void onSuccess(DocumentSnapshot documentSnapshot) {
UserAccountSettings settings = documentSnapshot.toObject(UserAccountSettings.class);
myCallback.onCallback(settings);
}
});
}
In the end, simply call the readData() method and pass an instance of the MyCallback interface as an argument wherever you need it, like this:
readData(new MyCallback() {
@Override
public void onCallback(UserAccountSettings settings) {
Log.d("TAG", settings.getDisplay_name());
}
});
This is the only way in which you can use that object of the UserAccountSettings class outside the onSuccess() method. For more information, you can also take a look at this video.
Use LiveData as the return type and observe changes to its value to execute the desired operation.
private MutableLiveData<UserAccountSettings> userSettingsMutableLiveData = new MutableLiveData<>();
public MutableLiveData<UserAccountSettings> getUserSettings() {
DocumentReference mSettings = mFirebaseFirestore.collection("user_account_settings").document(userID);
mSettings.get().addOnSuccessListener(new OnSuccessListener<DocumentSnapshot>() {
@Override
public void onSuccess(DocumentSnapshot documentSnapshot) {
UserAccountSettings settings = documentSnapshot.toObject(UserAccountSettings.class);
settings.setDisplay_name(documentSnapshot.getString("display_name"));
settings.setUsername(documentSnapshot.getString("username"));
settings.setWebsite(documentSnapshot.getString("website"));
settings.setProfile_photo(documentSnapshot.getString("profile_photo"));
settings.setPosts(documentSnapshot.getLong("posts"));
settings.setFollowers(documentSnapshot.getLong("followers"));
settings.setFollowing(documentSnapshot.getLong("following"));
userSettingsMutableLiveData.setValue(settings);
}
});
return userSettingsMutableLiveData;
}
Then, from your Activity/Fragment, observe the LiveData and do your desired operation inside onChanged():
getUserSettings().observe(this, new Observer<UserAccountSettings>() {
@Override
public void onChanged(UserAccountSettings userAccountSettings) {
//here, do whatever you want on `userAccountSettings`
}
});

Which design pattern to use for using different subclasses based on input [closed]

There is an interface called Processor, which has two implementations SimpleProcessor and ComplexProcessor.
Now I have a process, which consumes an input, and then using that input decides whether it should use SimpleProcessor or ComplexProcessor.
Current solution: I was thinking of using an Abstract Factory, which would generate the instance on the basis of the input.
But the issue is that I don't want new instances. I want to use already instantiated objects. That is, I want to re-use the instances.
That means, Abstract factory is absolutely the wrong pattern to use here, as it is for generating objects on the basis of type.
Another thing that our team normally does is to create a map from input to the corresponding processor instance. At runtime, we can use that map to get the correct instance on the basis of the input.
This feels like an ad-hoc solution.
I want this to be extendable : new input types can be mapped to new processor types.
Is there some standard way to solve this?
You can use a variation of the Chain of Responsibility pattern.
It will scale far better than using a Map (or hash table in general).
This variation will support dependency injection and is very easy to extend (without breaking any code or violating the Open-Closed principle).
As opposed to the classic version, handlers do not need to be explicitly chained; the classic version scales very badly.
The pattern uses polymorphism to enable extensibility and is therefore targeting an object oriented language.
The pattern is as follows:
The client API is a container class that manages a collection of input handlers (for example SimpleProcessor and ComplexProcessor).
Each handler is only known to the container by a common interface and unknown to the client.
The collection of handlers is passed to the container via the constructor (to enable optional dependency injection).
The container accepts the predicate (input) and passes it on to the anonymous handlers by iterating over the handler collection.
Each handler now decides based on the input if it can handle it (return true) or not (return false).
If a handler returns true (to signal that the input was successfully handled), the container will break further input processing by other handlers (alternatively, use a different criteria e.g., to allow multiple handlers to handle the input).
In the following very basic example implementation, the order of handler execution is simply defined by their position in their container (collection).
If this isn't sufficient, you can implement a simple priority algorithm (see the sketch after the usage example below).
Implementation (C#)
Below is the container. It manages the individual handler implementations using polymorphism. Since handler implementations are only known by their common interface, the container scales extremely well: simply add/inject an additional handler implementation.
The container is actually used directly by the client (whereas the handlers are hidden from the client, while anonymous to the container).
interface IInputProcessor
{
void Process(object input);
}
class InputProcessor : IInputProcessor
{
private IEnumerable<IInputHandler> InputHandlers { get; }
// Constructor.
// Optionally use an IoC container to inject the dependency (a collection of input handlers).
public InputProcessor(IEnumerable<IInputHandler> inputHandlers)
{
this.InputHandlers = inputHandlers;
}
// Method to handle the input.
// The input is then delegated to the input handlers.
public void Process(object input)
{
foreach (IInputHandler inputHandler in this.InputHandlers)
{
if (inputHandler.TryHandle(input))
{
return;
}
}
}
}
Below are the input handlers.
To add new handlers i.e. to extend input handling, simply implement the IInputHandler interface and add it to a collection which is passed/injected to the container (IInputProcessor):
interface IInputHandler
{
bool TryHandle(object input);
}
class SimpleProcessor : IInputHandler
{
public bool TryHandle(object input)
{
if (input is int number && number == 1)
{
//TODO::Handle input
return true;
}
return false;
}
}
class ComplexProcessor : IInputHandler
{
public bool TryHandle(object input)
{
if (input is int number && number == 3)
{
//TODO::Handle input
return true;
}
return false;
}
}
Usage Example
public class Program
{
public static void Main()
{
/* Setup the Chain of Responsibility.
Preferably configure an IoC container. */
var inputHandlers = new List<IInputHandler>
{
new SimpleProcessor(),
new ComplexProcessor()
};
IInputProcessor inputProcessor = new InputProcessor(inputHandlers);
/* Use the handler chain */
int input = 3;
inputProcessor.Process(input); // Will execute the ComplexProcessor
input = 1;
inputProcessor.Process(input); // Will execute the SimpleProcessor
}
}
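If the positional ordering shown above is not sufficient, a priority-based variation of the container could look roughly like this (the IPrioritizedInputHandler interface and its Priority property are assumptions added for illustration; requires System.Linq):
interface IPrioritizedInputHandler
{
    // Lower values run first.
    int Priority { get; }
    bool TryHandle(object input);
}
class PrioritizedInputProcessor : IInputProcessor
{
    private IEnumerable<IPrioritizedInputHandler> InputHandlers { get; }
    public PrioritizedInputProcessor(IEnumerable<IPrioritizedInputHandler> inputHandlers)
    {
        this.InputHandlers = inputHandlers;
    }
    public void Process(object input)
    {
        // Order the handlers by priority before delegating the input to them.
        foreach (IPrioritizedInputHandler inputHandler in this.InputHandlers.OrderBy(h => h.Priority))
        {
            if (inputHandler.TryHandle(input))
            {
                return;
            }
        }
    }
}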
It is possible to use the Strategy pattern in combination with the Factory pattern. Factory objects can be cached so that instances are reused instead of being recreated every time they are needed.
As an alternative to caching, it is possible to use the singleton pattern. In ASP.NET Core this is pretty simple: if you have a DI container, just make sure the instances are registered with a singleton lifetime.
Let's start with the first example. We need some enum of ProcessorType:
public enum ProcessorType
{
Simple, Complex
}
Then this is our abstraction of processors:
public interface IProcessor
{
DateTime DateCreated { get; }
}
And its concrete implementations:
public class SimpleProcessor : IProcessor
{
public DateTime DateCreated { get; } = DateTime.Now;
}
public class ComplexProcessor : IProcessor
{
public DateTime DateCreated { get; } = DateTime.Now;
}
Then we need a factory with cached values:
public class ProcessorFactory
{
private static readonly IDictionary<ProcessorType, IProcessor> _cache
= new Dictionary<ProcessorType, IProcessor>()
{
{ ProcessorType.Simple, new SimpleProcessor() },
{ ProcessorType.Complex, new ComplexProcessor() }
};
public IProcessor GetInstance(ProcessorType processorType)
{
return _cache[processorType];
}
}
And code can be run like this:
ProcessorFactory processorFactory = new ProcessorFactory();
Thread.Sleep(3000);
var simpleProcessor = processorFactory.GetInstance(ProcessorType.Simple);
Console.WriteLine(simpleProcessor.DateCreated); // OUTPUT: 2022-07-07 8:00:01
ProcessorFactory processorFactory_1 = new ProcessorFactory();
Thread.Sleep(3000);
var complexProcessor = processorFactory_1.GetInstance(ProcessorType.Complex);
Console.WriteLine(complexProcessor.DateCreated); // OUTPUT: 2022-07-07 8:00:01
The second way
The second way is to use a DI container, so we need to modify our factory to get its instances from the dependency injection container:
public class ProcessorFactoryByDI
{
private readonly IDictionary<ProcessorType, IProcessor> _cache;
public ProcessorFactoryByDI(
SimpleProcessor simpleProcessor,
ComplexProcessor complexProcessor)
{
_cache = new Dictionary<ProcessorType, IProcessor>()
{
{ ProcessorType.Simple, simpleProcessor },
{ ProcessorType.Complex, complexProcessor }
};
}
public IProcessor GetInstance(ProcessorType processorType)
{
return _cache[processorType];
}
}
And if you use ASP.NET Core, then you can declare your objects as singleton like this:
services.AddSingleton<SimpleProcessor>();
services.AddSingleton<ComplexProcessor>();
Read more about the lifetime of an object.
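For illustration, the registration and a consumer in ASP.NET Core might look like this (SomeConsumer is a hypothetical class; the other names match the code above):
// In Startup.ConfigureServices (or builder.Services with minimal hosting):
services.AddSingleton<SimpleProcessor>();
services.AddSingleton<ComplexProcessor>();
services.AddSingleton<ProcessorFactoryByDI>();
// A hypothetical consumer that picks a processor based on its input:
public class SomeConsumer
{
    private readonly ProcessorFactoryByDI _factory;
    public SomeConsumer(ProcessorFactoryByDI factory)
    {
        _factory = factory;
    }
    public void Handle(ProcessorType processorType)
    {
        IProcessor processor = _factory.GetInstance(processorType);
        // The same cached instance is returned on every call.
    }
}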

Can I change my response data in OutputFormatter in ASP.NET Core 3.1

I'm trying to create a simple feature to make the first action act like the second one.
public IActionResult GetMessage()
{
return "message";
}
public IActionResult GetMessageDataModel()
{
return new MessageDataModel("message");
}
The first idea that came to my mind was to extend SystemTextJsonOutputFormatter and wrap context.Object with my data model in WriteResponseBodyAsync, but that method is marked sealed.
Then I tried to override WriteAsync, but context.Object doesn't have a protected setter either.
Is there any way I can achieve this by manipulating an OutputFormatter?
Or do I have another option instead of a custom OutputFormatter?
For some reason they prefer every response in the same format, like {"return":"some message I write.","code":1}, hence I want this feature instead of creating a MessageDataModel every time.
Based on your description and requirement, it seems that you'd like to generate unified-format data globally instead of achieving it in each action's code logic. To achieve it, you can try to implement it in an action filter, like below.
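For the filter below to compile, a simple model along the following lines is assumed; the property names are only inferred from the JSON shape mentioned in the question:
// Assumed unified response model; shape inferred from {"return":"...","code":1}.
public class MessageDataModel
{
    public string Return { get; set; }
    public int Code { get; set; }
}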
public class MyCustomFilter : Attribute, IActionFilter
{
public void OnActionExecuted(ActionExecutedContext context)
{
// implement code logic here
// based on your actual scenario
// get original message
// generate new instance of MessageDataModel
//example:
if (context.Result is JsonResult mes)
{
var model = new MessageDataModel
{
Code = 1,
Return = mes.Value?.ToString()
};
context.Result = new JsonResult(model);
}
}
public void OnActionExecuting(ActionExecutingContext context)
{
// nothing to do before the action executes
}
}
Apply it on specific action(s)
[MyCustomFilter]
public IActionResult GetMessage()
{
return Json("message");
}

Wrong Thread.CurrentPrincipal in async WCF end-method

I have a WCF service which has its Thread.CurrentPrincipal set in the ServiceConfiguration.ClaimsAuthorizationManager.
When I implement the service asynchronously like this:
public IAsyncResult BeginMethod1(AsyncCallback callback, object state)
{
// Audit log call (uses Thread.CurrentPrincipal)
var task = Task<int>.Factory.StartNew(this.WorkerFunction, state);
return task.ContinueWith(res => callback(task));
}
public string EndMethod1(IAsyncResult ar)
{
// Audit log result (uses Thread.CurrentPrincipal)
return ar.AsyncState as string;
}
private int WorkerFunction(object state)
{
// perform work
return 0;
}
I find that the Thread.CurrentPrincipal is set to the correct ClaimsPrincipal in the Begin-method and also in the WorkerFunction, but in the End-method it's set to a GenericPrincipal.
I know I can enable ASP.NET compatibility for the service and use HttpContext.Current.User which has the correct principal in all methods, but I'd rather not do this.
Is there a way to force the Thread.CurrentPrincipal to the correct ClaimsPrincipal without turning on ASP.NET compatibility?
Start with a summary of WCF extension points and you'll see the one that is expressly designed to solve your problem. It is called a CallContextInitializer. Take a look at this article, which gives CallContextInitializer sample code.
If you make an ICallContextInitializer extension, you will be given control over both the BeginXXX thread context AND the EndXXX thread context. You are saying that the ClaimsAuthorizationManager has correctly established the user principal in your BeginXXX(...) method. In that case, you then make for yourself a custom ICallContextInitializer which either assigns or records the CurrentPrincipal, depending on whether it is handling your BeginXXX() or your EndXXX(). Something like:
public object BeforeInvoke(System.ServiceModel.InstanceContext instanceContext, System.ServiceModel.IClientChannel channel, System.ServiceModel.Channels.Message request){
object principal = null;
if (request.Properties.TryGetValue("userPrincipal", out principal))
{
//If we got here, it means we're about to call the EndXXX(...) method.
Thread.CurrentPrincipal = (IPrincipal)principal;
}
else
{
//If we got here, it means we're about to call the BeginXXX(...) method.
request.Properties["userPrincipal"] = Thread.CurrentPrincipal;
}
...
}
To clarify further, consider two cases. Suppose you implemented both an ICallContextInitializer and an IParameterInspector. Suppose that these hooks are expected to execute with a synchronous WCF service and with an async WCF service (which is your special case).
Below are the sequence of events and the explanation of what is happening:
Synchronous Case
ICallContextInitializer.BeforeInvoke();
IParameterInspector.BeforeCall();
//...service executes...
IParameterInspector.AfterCall();
ICallContextInitializer.AfterInvoke();
Nothing surprising in the above code. But now look below at what happens with asynchronous service operations...
Asynchronous Case
ICallContextInitializer.BeforeInvoke(); //TryGetValue() fails, so this records the UserPrincipal.
IParameterInspector.BeforeCall();
//...Your BeginXXX() routine now executes...
ICallContextInitializer.AfterInvoke();
//...Now your Task async code executes (or finishes executing)...
ICallContextInitializer.BeforeInvoke(); //TryGetValue succeeds, so this assigns the UserPrincipal.
//...Your EndXXX() routine now executes...
IParameterInspector.AfterCall();
ICallContextInitializer.AfterInvoke();
As you can see, the CallContextInitializer ensures you have opportunity to initialize values such as your CurrentPrincipal just before the EndXXX() routine runs. It therefore doesn't matter that the EndXXX() routine assuredly is executing on a different thread than did the BeginXXX() routine. And yes, the System.ServiceModel.Channels.Message object which is storing your user principal between Begin/End methods, is preserved and properly transmitted by WCF even though the thread changed.
Overall, this approach allows your EndXXX(IAsyncresult) to execute with the correct IPrincipal, without having to explicitly re-establish the CurrentPrincipal in the EndXXX() routine. And as with any WCF behavior, you can decide if this applies to individual operations, all operations on a contract, or all operations on an endpoint.
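For completeness, an ICallContextInitializer also has to implement AfterInvoke (which can simply be left empty here) and be attached to the dispatch runtime. A rough sketch of wiring it to all operations of a contract via an attribute might look like this (PrincipalFlowBehaviorAttribute and PrincipalCallContextInitializer are illustrative names; the latter is the class containing the BeforeInvoke shown above):
using System;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;
[AttributeUsage(AttributeTargets.Interface | AttributeTargets.Class)]
public class PrincipalFlowBehaviorAttribute : Attribute, IContractBehavior
{
    public void ApplyDispatchBehavior(ContractDescription contractDescription,
        ServiceEndpoint endpoint, DispatchRuntime dispatchRuntime)
    {
        // Attach the custom initializer to every operation of the contract.
        foreach (DispatchOperation operation in dispatchRuntime.Operations)
        {
            operation.CallContextInitializers.Add(new PrincipalCallContextInitializer());
        }
    }
    public void AddBindingParameters(ContractDescription contractDescription,
        ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyClientBehavior(ContractDescription contractDescription,
        ServiceEndpoint endpoint, ClientRuntime clientRuntime) { }
    public void Validate(ContractDescription contractDescription, ServiceEndpoint endpoint) { }
}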
Not really the answer to my question, but an alternative approach to implementing the WCF service (in .NET 4.5) that does not exhibit the same issues with Thread.CurrentPrincipal:
public async Task<string> Method1()
{
// Audit log call (uses Thread.CurrentPrincipal)
try
{
return await Task.Factory.StartNew(() => this.WorkerFunction());
}
finally
{
// Audit log result (uses Thread.CurrentPrincipal)
}
}
private string WorkerFunction()
{
// perform work
return string.Empty;
}
A valid approach to this is to create an extension:
public class SLOperationContext : IExtension<OperationContext>
{
private readonly IDictionary<string, object> items;
private static ReaderWriterLockSlim _instanceLock = new ReaderWriterLockSlim();
private SLOperationContext()
{
items = new Dictionary<string, object>();
}
public IDictionary<string, object> Items
{
get { return items; }
}
public static SLOperationContext Current
{
get
{
SLOperationContext context = OperationContext.Current.Extensions.Find<SLOperationContext>();
if (context == null)
{
_instanceLock.EnterWriteLock();
context = new SLOperationContext();
OperationContext.Current.Extensions.Add(context);
_instanceLock.ExitWriteLock();
}
return context;
}
}
public void Attach(OperationContext owner) { }
public void Detach(OperationContext owner) { }
}
Now this extension is used as a container for objects that you want to persist between thread switching as OperationContext.Current will remain the same.
Now you can use this in BeginMethod1 to save current user:
SLOperationContext.Current.Items["Principal"] = OperationContext.Current.ClaimsPrincipal;
And then in EndMethod1 you can get the user by typing:
ClaimsPrincipal principal = (ClaimsPrincipal)SLOperationContext.Current.Items["Principal"];
EDIT (Another approach):
public IAsyncResult BeginMethod1(AsyncCallback callback, object state)
{
var task = Task.Factory.StartNew(this.WorkerFunction, state);
var ec = ExecutionContext.Capture();
return task.ContinueWith(res =>
ExecutionContext.Run(ec, (_) => callback(task), null));
}

Custom error pages in mvc 4 application, setup with Windows authentication

I have an intranet application setup with windows authentication. Like in most applications, certain parts of the application are accessible to specific roles only. When a user not in desired role would try to access that area, he should be shown a friendly "You do not have permission to view this page" view.
I searched and looked at several resources that guide you to extend the AuthorizeAttribute. I tried that approach, but it simply doesn't work. I still get the IIS error message, and the breakpoint in this custom attribute never gets hit. The breakpoint in my extended attribute doesn't get hit even when a user in the role visits the page. So I am wondering if I am missing anything?
This is what I have -
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class AuthorizeRedirect : AuthorizeAttribute
{
private const string IS_AUTHORIZED = "isAuthorized";
public string RedirectUrl = "~Areas/Errors/Http401";
protected override bool AuthorizeCore(HttpContextBase httpContext)
{
bool isAuthorized = base.AuthorizeCore(httpContext);
httpContext.Items.Add(IS_AUTHORIZED, isAuthorized);
return isAuthorized;
}
public override void OnAuthorization(AuthorizationContext filterContext)
{
base.OnAuthorization(filterContext);
var isAuthorized = filterContext.HttpContext.Items[IS_AUTHORIZED] != null ? Convert.ToBoolean(filterContext.HttpContext.Items[IS_AUTHORIZED]) : false;
if(!isAuthorized && filterContext.RequestContext.HttpContext.User.Identity.IsAuthenticated)
{
filterContext.RequestContext.HttpContext.Response.Redirect(RedirectUrl);
}
}
}
CONTROLLER -
[AuthorizeRedirect]
[HttpPost, ValidateInput(true)]
public ActionResult NewPart(PartsViewModel vmodel) {..}
Any ideas?
Thanks
I think you could use custom error pages instead. Use AuthorizeAttribute to restrict access by callers to an action method.
[Authorize (Roles="Editor, Moderator", Users="Ann, Gohn")]
public ActionResult RestrictedAction()
{
// action logic
}
Then you could use one of the approaches proposed by @Marco. I like handling the HTTP status code within Application_EndRequest, so it is possible to solve your problem with the following:
protected void Application_EndRequest()
{
int status = Response.StatusCode;
if (status == 401)
{
Response.Clear();
var rd = new RouteData();
rd.DataTokens["area"] = "Areas";
rd.Values["controller"] = "Errors";
rd.Values["action"] = "Http401";
IController c = new ErrorsController();
c.Execute(new RequestContext(new HttpContextWrapper(Context), rd));
}
}
To clearly specify what happens to an existing response when the HTTP status code is an error, you should use the existingResponse attribute of the <httpErrors> element in your configuration file. If you want the error page to appear immediately, use the Replace value; otherwise, PassThrough (see details in my issue).
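As an illustration, the relevant fragment of web.config could look like this (PassThrough keeps the response generated by your own code, while Replace lets IIS substitute its configured error page):
<!-- In the system.webServer section of web.config -->
<system.webServer>
  <httpErrors errorMode="Custom" existingResponse="PassThrough" />
</system.webServer>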