I'm implementing a distributed Redis cache in .NET Core to be shared among multiple .NET Core services in a microservice environment. The microservices run on multiple servers behind a load balancer on IIS. The Redis cache will be used to temporarily hold data related to individual users, and the Redis server will be hosted internally. I already have a cookie used for external authentication (AuthCookie), but I don't use Identity or Session since the user is expected to be logged in already: I just send the AuthCookie to my authentication service, it authenticates, and I fill my ClaimsPrincipal with the results.
Since my cache will be user-specific, I thought it would be a good idea to use .NET session state with a Redis distributed cache backing. I wanted to use the AuthCookie as the session key, but that fails because the session middleware doesn't know how to decrypt it, which is a pain. So I have to have a second, session-specific cookie. The problem is that the session isn't shared across the servers behind my load balancer, even though I persist my data-protection keys. Here's a simplified version of my setup:
public void ConfigureServices(IServiceCollection services)
{
var host = "127.0.0.1";
var port = "6379";
var conn = $"{host}:{port}";
var redis = ConnectionMultiplexer.Connect(conn);
services.AddDataProtection().PersistKeysToRedis(redis, "DataProtection-Keys");
services.AddDistributedRedisCache(options =>
{
options.Configuration = conn;
options.InstanceName = "master";
});
services.AddSession(options => options.CookieName = "NetSession");
services.AddMvc();
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
app.UseSession();
app.UseMvc();
}
public class RedisController : Controller
{
public string Get(string key)
{
return HttpContext.Session.GetString(key);
}
public string Post([FromBody] CacheData data)
{
HttpContext.Session.SetString(data.Key, data.Value);
return $"Added {data.Key} with {data.Value}";
}
}
This works fine on one server, but with 3 instances behind my load balancer it doesn't. I call the load balancer, it sets a key on server A, Redis shows an entry like masterSOMEGUID, and a long "NetSession" cookie is returned. I then call Get through the load balancer, server B picks it up and the "NetSession" cookie is sent, but when looking up the key it generates a different GUID and can't find it. If I hit server A again it finds it with no problem. I can't figure out what I'm doing wrong.
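(One thing I still need to rule out, based on the Data Protection docs rather than anything shown above: in an IIS web farm each site can resolve a different application discriminator, so each server protects the session cookie under a different application name even though the key ring in Redis is shared. Pinning a common application name would look like the sketch below; "MySharedApp" is just a placeholder.)
services.AddDataProtection()
    .PersistKeysToRedis(redis, "DataProtection-Keys")
    .SetApplicationName("MySharedApp"); // placeholder; must be identical on every server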
A separate solution I've found is to not use Session and just use IDistributedCache with the AuthCookie that's already on the requests.
public class RedisController : Controller
{
private readonly IDistributedCache _cache;
public RedisController(IDistributedCache cache)
{
_cache = cache;
}
public string Get(string key)
{
return _cache.GetString($"{HttpContext.Request.Cookies["AuthCookie"]}{key}");
}
public string Post([FromBody] CacheData data)
{
_cache.SetString($"{HttpContext.Request.Cookies["AuthCookie"]}{data.Key}", data.Value);
return $"Added {data.Key} with {data.Value}";
}
}
This obviously creates a lot more entries in the Redis store. Is it bad form to do it this way? It should be noted that I won't be storing user details, just user responses and other reusable information.
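(If I do keep this approach, I'd at least give every entry an expiration so the per-user keys don't accumulate forever. A minimal sketch of that, with an arbitrary 20-minute sliding window:)
public string Post([FromBody] CacheData data)
{
    var cacheKey = $"{HttpContext.Request.Cookies["AuthCookie"]}{data.Key}";
    _cache.SetString(cacheKey, data.Value, new DistributedCacheEntryOptions
    {
        SlidingExpiration = TimeSpan.FromMinutes(20) // arbitrary example window
    });
    return $"Added {data.Key} with {data.Value}";
}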
Related
I have a REST API and an IdentityServer set up. I would like to be able to display items in my client from the API without having to sign in. However, I would also like to protect my API from external clients that don't belong to me. Is it possible to AllowAnonymous but only from my client?
[HttpGet]
[AllowAnonymous]
public List<string> GetItems()
{
return new List<string> { "item1", "item2" };
}
Edit w/ Solution
As mentioned by Tore Nestenius, I changed the grant types from Code to CodeAndClientCredentials and added the Authorize attribute to my controller so that only my client can access it.
Controller:
[HttpGet]
[Authorize]
public List<string> GetItems()
{
return new List<string> { "item1", "item2" };
}
Identity Server 4 Config File:
public static IEnumerable<Client> Clients =>
new Client[]
{
new Client
{
ClientId = "postman-api",
AllowedGrantTypes = GrantTypes.CodeAndClientCredentials,
ClientSecrets =
{
new Secret("secret".Sha256())
},
}
};
}
CORS only applies to requests made from browsers; if a non-browser application makes the request, CORS is not involved.
If you use [AllowAnonymous], then any client can access that API endpoint. An alternative is to create a separate client for the general, unauthenticated calls, perhaps using the Client Credentials flow, so that the client can authenticate and get its own token without any user involved.
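A hedged sketch of what requesting such a token from your own client could look like, using the IdentityModel client library (my choice of library here; the token endpoint address and scope are placeholders that must match your IdentityServer configuration):
public static async Task<HttpClient> CreateApiClientAsync()
{
    var http = new HttpClient();

    // Ask IdentityServer for a token using the Client Credentials flow.
    var tokenResponse = await http.RequestClientCredentialsTokenAsync(new ClientCredentialsTokenRequest
    {
        Address = "https://your-identityserver/connect/token", // placeholder
        ClientId = "postman-api",
        ClientSecret = "secret",
        Scope = "your-api-scope" // placeholder
    });

    // Send the token as a bearer token on subsequent API calls.
    var apiClient = new HttpClient();
    apiClient.SetBearerToken(tokenResponse.AccessToken);
    return apiClient;
}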
Turns out this is handled by CORS.
public void ConfigureServices(IServiceCollection services)
{
services.AddCors(options =>
{
options.AddDefaultPolicy(
builder => builder
.WithOrigins("yourURL")
.AllowAnyMethod());
});
}
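Note that registering the policy alone isn't enough; the CORS middleware also has to be added to the pipeline before MVC. A sketch, assuming ASP.NET Core 2.2 or later where the parameterless overload applies the default policy:
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseCors(); // applies the default policy registered above
    app.UseMvc();
}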
I am a newbie in ASP.NET Core, and I have a controller that I need to authorize only on my own machine, for test purposes, while denying access from any other...
I have the following config:
public void ConfigureServices(IServiceCollection services)
{
services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
services.AddMvc().AddJsonOptions(options =>
{
options.SerializerSettings.DateFormatString = "yyyy-MM-ddTHH:mm:ssZ";
});
services.AddAuthentication("Cookie")
.AddScheme<CookieAuthenticationOptions, CookieAuthenticationHandler>("Cookie", null);
services.AddLogging(builder => { builder.AddSerilog(dispose: true); });
}
And on the test controller I enabled the [Authorize] attribute:
[Authorize]
public class OrderController : Controller
Is there a way to allow my local machine to be authorized to access the controller's actions? Something like [Authorize(Allow=localhost)]?
You can create an action filter like so:
public class LocalhostAttribute : ActionFilterAttribute
{
public override void OnActionExecuting(ActionExecutingContext context)
{
var ip = context.HttpContext.Connection.RemoteIpAddress;
if (!IPAddress.IsLoopback(ip)) {
context.Result = new UnauthorizedResult();
return;
}
base.OnActionExecuting(context);
}
}
And then apply the Localhost attribute:
//[Authorize]
[Localhost]
public class OrderController : Controller
I believe this will work, restricting access to the machine the application is running on.
This is more whitelisting than authorization. Authorization means checking whether a user has permission to do something. To do that, the user must be identified first, i.e. authenticated.
The article Client IP Safelist in the docs shows how you can implement IP safelists through middleware, an action filter or a Razor Pages filter.
App-wide Middleware
The middleware option applies to the entire application. The sample code retrieves the request's remote IP address, checks it against a list of safe IPs, and allows the call to proceed only if it comes from an address on that list. Otherwise it returns a predetermined error code, in this case 401:
public async Task Invoke(HttpContext context)
{
if (context.Request.Method != "GET")
{
var remoteIp = context.Connection.RemoteIpAddress;
_logger.LogDebug("Request from Remote IP address: {RemoteIp}", remoteIp);
string[] ip = _adminSafeList.Split(';');
var bytes = remoteIp.GetAddressBytes();
var badIp = true;
foreach (var address in ip)
{
var testIp = IPAddress.Parse(address);
if(testIp.GetAddressBytes().SequenceEqual(bytes))
{
badIp = false;
break;
}
}
if(badIp)
{
_logger.LogInformation(
"Forbidden Request from Remote IP address: {RemoteIp}", remoteIp);
context.Response.StatusCode = 401;
return;
}
}
await _next.Invoke(context);
}
The article shows registering it before UseMvc(), which means the request will be rejected before it reaches the MVC middleware:
app.UseMiddleware<AdminSafeListMiddleware>(Configuration["AdminSafeList"]);
app.UseMvc();
This way we don't waste CPU time routing and processing a request that's going to be rejected anyway. The middleware option is a good choice for implementing a blacklist too.
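For completeness, here is a sketch of the class the Invoke method above lives in, following the shape of the docs sample (constructor parameter names are approximate):
public class AdminSafeListMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<AdminSafeListMiddleware> _logger;
    private readonly string _adminSafeList;

    // The safelist string is the extra argument passed to UseMiddleware above;
    // RequestDelegate and ILogger come from the container.
    public AdminSafeListMiddleware(
        RequestDelegate next,
        ILogger<AdminSafeListMiddleware> logger,
        string adminSafeList)
    {
        _next = next;
        _logger = logger;
        _adminSafeList = adminSafeList;
    }

    // public async Task Invoke(HttpContext context) { ... } as shown above
}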
Action Filter
The filtering code is essentially the same, this time defined in a class derived from ActionFilterAttribute. The filter is defined as a scoped service:
services.AddScoped<ClientIpCheckFilter>();
services.AddMvc(options =>
{
options.Filters.Add
(new ClientIpCheckPageFilter
(_loggerFactory, Configuration));
}).SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
In this case the request will reach the MVC infrastructure before it's accepted or rejected.
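Per the same article, the scoped filter is then applied to individual controllers or actions with the ServiceFilter attribute, for example:
[ServiceFilter(typeof(ClientIpCheckFilter))]
[HttpGet]
public IEnumerable<string> Get()
{
    return new string[] { "value1", "value2" };
}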
Razor Pages Filter
The code is once more the same, this time implemented in a class deriving from IPageFilter.
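A rough sketch of that shape (hedged; the docs' ClientIpCheckPageFilter also injects a logger and reads the safelist from configuration):
public class ClientIpCheckPageFilter : IPageFilter
{
    public void OnPageHandlerSelected(PageHandlerSelectedContext context) { }

    public void OnPageHandlerExecuting(PageHandlerExecutingContext context)
    {
        var remoteIp = context.HttpContext.Connection.RemoteIpAddress;
        // ...same safelist check as in the middleware above...
        // If the address isn't on the list: context.Result = new UnauthorizedResult();
    }

    public void OnPageHandlerExecuted(PageHandlerExecutedContext context) { }
}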
I have an ASP.NET Core (Core 2.1) web app that works fine in any single-server environment, but times out after a few seconds in an environment with 2 load-balanced web servers.
Here are my Startup.cs and other relevant classes. I hope someone can point me to the right path. I've spent 2 days on this and can't find anything else that needs setting or anything wrong with what I'm doing.
Some explanation of the code below:
As you can see, I've followed the steps to configure the app to use distributed SQL Server caching, and I have a static helper class, HttpSessionService, which handles adding and getting values from session state. I also have a SessionTimeout attribute that I put on each of my controllers to control session timeouts. After a few seconds or clicks in the app, as each controller action makes this call
HttpSessionService.Redirect()
this Redirect() method gets a NULL user session from the following line, which causes the app to time out:
var userSession = GetValues<UserIdentityView>(SessionKeys.User);
I've attached Visual Studio debuggers to both servers, and I've noticed that even when all requests are going to one of the debugger instances (one server), the ASP.NET session still returns NULL for the userSession value above.
Again, this ONLY happens on a distributed environment, i.e. if I stop one of the sites on one of the web servers everything works fine.
I have looked at and implemented distributed session state caching with SQL Server as explained (identically) on several pages; here are a few:
https://learn.microsoft.com/en-us/aspnet/core/performance/caching/distributed?view=aspnetcore-3.0
https://www.c-sharpcorner.com/article/configure-sql-server-session-state-in-asp-net-core/
And I do see sessions being written to my created AppSessionState table, yet the app continues to timeout in the environment with 2 load-balanced servers.
Startup.cs:
public void ConfigureServices(IServiceCollection services)
{
// Session State distributed cache configuration against SQLServer.
var aspStateConnStr = ConfigurationManager.ConnectionStrings["ASPState"].ConnectionString;
var aspSessionStateSchemaName = _config.GetValue<string>("AppSettings:AspSessionStateSchemaName");
var aspSessionStateTbl = _config.GetValue<string>("AppSettings:AspSessionStateTable");
services.AddDistributedSqlServerCache(options =>
{
options.ConnectionString = aspStateConnStr;
options.SchemaName = aspSessionStateSchemaName;
options.TableName = aspSessionStateTbl;
});
....
services.AddSession(options =>
{
options.IdleTimeout = TimeSpan.FromSeconds(1200);
options.Cookie.HttpOnly = true;
options.Cookie.IsEssential = true;
});
services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
...
services.AddMvc().AddJsonOptions(opt => opt.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver());
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env, IApplicationLifetime lifetime, IDistributedCache distCache)
{
var distCacheOptions = new DistributedCacheEntryOptions()
.SetSlidingExpiration(TimeSpan.FromMinutes(5));
// Session State distributed cache configuration.
lifetime.ApplicationStarted.Register(() =>
{
var currentTimeUTC = DateTime.UtcNow.ToString();
byte[] encodedCurrentTimeUTC = Encoding.UTF8.GetBytes(currentTimeUTC);
distCache.Set("cachedTimeUTC", encodedCurrentTimeUTC, distCacheOptions);
});
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
app.UseDatabaseErrorPage();
}
else
{
app.UseExceptionHandler("/Home/Error");
app.UseHsts();
}
app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseSession(); // This must be called before the app.UseMvc()
app.UseMvc(routes =>
{
routes.MapRoute(
name: "default",
template: "{controller=Home}/{action=Index}/{id?}");
});
HttpSessionService.Configure(app.ApplicationServices.GetRequiredService<IHttpContextAccessor>(), distCache);
}
HttpSessionService (helper class):
public class HttpSessionService
{
private static IHttpContextAccessor _httpContextAccessor;
private static IDistributedCache _distributedCache;
private static ISession _session => _httpContextAccessor.HttpContext.Session;
public static void Configure(IHttpContextAccessor httpContextAccessor, IDistributedCache distCache)
{
_httpContextAccessor = httpContextAccessor;
_distributedCache = distCache;
}
public static void SetValues<T>(string key, T value)
{
_session.Set<T>(key, value);
}
public static T GetValues<T>(string key)
{
var sessionValue = _session.Get<T>(key);
return sessionValue == null ? default(T) : sessionValue;
}
public static bool Redirect()
{
var result = false;
var userSession = GetValues<UserIdentityView>(SessionKeys.User);
if (userSession == null || userSession?.IsAuthenticated == false)
{
result = true;
}
return result;
}
}
SessionTimeoutAttribute:
public class SessionTimeoutAttribute : ActionFilterAttribute
{
public override void OnActionExecuting(ActionExecutingContext context)
{
var redirect = HttpSessionService.Redirect();
if (redirect)
{
context.Result = new RedirectResult("~/Account/SessionTimeOut");
return;
}
base.OnActionExecuting(context);
}
}
MyController
[SessionTimeout]
public class MyController : Controller
{
// Every action in this and any other controller times out and I get redirected by SessionTimeoutAttribute to "~/Account/SessionTimeOut"
}
Sorry for the late reply on this. I've changed my original implementation by injecting the IDistributedCache interface into all of my controllers and using this setting in the Startup.cs ConfigureServices() method:
services.AddDistributedSqlServerCache(options =>
{
options.ConnectionString = aspStateConnStr;
options.SchemaName = aspSessionStateSchemaName;
options.TableName = aspSessionStateTbl;
options.ExpiredItemsDeletionInterval = null;
});
That made it work in a web farm.
As you can see, I'm setting ExpiredItemsDeletionInterval to null to prevent some basic cache entries from being cleared out of the cache. But in doing so I ran into another problem: when I attempt to get those entries I still get null back, even though the entry is in the database table. So that's another thing I'm trying to figure out.
It looks like you're capturing the Session value from HttpContext in your static HttpSessionService instance. That value is per-request so it's definitely going to randomly fail if you capture it like that. You need to go through the IHttpContextAccessor every time you want to access an HttpContext value, if you want to get the latest value.
Also, I'd suggest you pass an HttpContext in to your helper methods rather than using IHttpContextAccessor. It has performance implications and should generally only be used if you absolutely can't pass an HttpContext through. The places you show here seem to have an HttpContext available, so I'd recommend using that instead of the accessor.
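A minimal sketch of that suggestion (my interpretation, not the answerer's code; Get<T> stands in for the poster's existing custom session extension):
public static class HttpSessionService
{
    private static IHttpContextAccessor _httpContextAccessor;

    public static void Configure(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    // Preferred shape: the caller passes the current HttpContext in.
    public static T GetValues<T>(HttpContext httpContext, string key)
    {
        var sessionValue = httpContext.Session.Get<T>(key);
        return sessionValue == null ? default(T) : sessionValue;
    }

    // If the accessor must be used, resolve HttpContext on every call and never
    // cache the ISession instance in a field.
    public static T GetValues<T>(string key)
    {
        var session = _httpContextAccessor.HttpContext.Session;
        var sessionValue = session.Get<T>(key);
        return sessionValue == null ? default(T) : sessionValue;
    }
}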
I have the following solution in order to implement multiple IDistributedCache definitions:
public interface IDBCache : IDistributedCache
{
}
public class DBCacheOptions : RedisCacheOptions { }
public class DBCache : RedisCache, IDBCache
{
public DBCache(IOptions<DBCacheOptions> optionsAccessor) : base(optionsAccessor)
{
}
}
And I have other definitions like the one above pointing to different Redis instances.
I am registering the cache service at Startup.cs as:
services.Configure<DBCacheOptions>(options => options.Configuration = configuration.GetValue<string>("Cache:DB"));
services.Add(ServiceDescriptor.Singleton<IDBCache, DBCache>());
And then I am wrapping IDBCache as:
public class DBCacheManager
{
private const string DB_CACHE_FORMAT = "DB:{0}";
private const int DB_EXPIRATION_HOURS = 8;
private readonly IDistributedCache _cache;
public DBCacheManager(IDBCache cache)
{
_cache = cache;
}
public Task AddDBItem(string name, string value)
{
return _cache.SetStringAsync(string.Format(DB_CACHE_FORMAT, name), value,
new DistributedCacheEntryOptions { AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(DB_EXPIRATION_HOURS) });
}
}
When I check the clients connected to Redis (the INFO clients command), connected_clients keeps incrementing without stopping, and when I look at the client list (the CLIENT LIST command), I see a large list of connections with long ages and idle times.
Insights: I am using the Redis implementation of AWS ElastiCache, which has an unlimited idle timeout by default, but I guess I should not be forcing these connections to close, should I? I suppose my application should be responsible for that.
This turned out to be a bad implementation of dependency injection. The IDistributedCache interface does not expose the Redis INCR command, so somewhere in our project we were connecting directly with StackExchange.Redis through a DI wrapper that was creating multiple connection multiplexers and IDatabase instances.
Bottom line: my bad
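For reference, the usual way to avoid the connection growth described above is to register one shared ConnectionMultiplexer and hand out IDatabase instances from it. A hedged sketch of that pattern (RedisCounter is an illustrative name, and the "Cache:DB" setting mirrors the registration earlier in this answer); it is not the project's exact fix:
// One shared multiplexer for the whole app; ConnectionMultiplexer is designed
// to be reused, and GetDatabase() on it is cheap.
services.AddSingleton<IConnectionMultiplexer>(sp =>
    ConnectionMultiplexer.Connect(configuration.GetValue<string>("Cache:DB")));

// Consumers take the shared multiplexer and use IDatabase for commands that
// IDistributedCache doesn't expose, such as INCR.
public class RedisCounter
{
    private readonly IDatabase _db;

    public RedisCounter(IConnectionMultiplexer mux)
    {
        _db = mux.GetDatabase();
    }

    public Task<long> IncrementAsync(string key) => _db.StringIncrementAsync(key);
}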
I am setting up bearer token authentication in Web API 2, and I don't understand how (or where) the bearer token is being stored server-side. Here is the relevant code:
Startup:
public partial class Startup
{
public static OAuthAuthorizationServerOptions OAuthOptions { get; private set; }
public static Func<UserManager<IdentityUser>> UserManagerFactory { get; set; }
public static string PublicClientId { get; private set; }
static Startup()
{
PublicClientId = "self";
UserManagerFactory = () => new UserManager<IdentityUser>(new UserStore<IdentityUser>());
OAuthOptions = new OAuthAuthorizationServerOptions
{
TokenEndpointPath = new PathString("/Token"),
Provider = new ApplicationOAuthProvider(PublicClientId, UserManagerFactory),
AuthorizeEndpointPath = new PathString("/api/Account/ExternalLogin"),
AccessTokenExpireTimeSpan = TimeSpan.FromDays(14),
AllowInsecureHttp = true
};
}
public void ConfigureAuth(IAppBuilder app)
{
// Enable the application to use a cookie to store information for the signed in user
app.UseCookieAuthentication(new CookieAuthenticationOptions());
// Use a cookie to temporarily store information about a user logging in with a third party login provider
app.UseExternalSignInCookie(DefaultAuthenticationTypes.ExternalCookie);
app.UseOAuthBearerTokens(OAuthOptions);
}
}
WebApiConfig:
public class WebApiConfig
{
public static void ConfigureWebApi()
{
Register(GlobalConfiguration.Configuration);
}
public static void Register(HttpConfiguration http)
{
AuthUtil.ConfigureWebApiToUseOnlyBearerTokenAuthentication(http);
http.Routes.MapHttpRoute("ActionApi", "api/{controller}/{action}", new {action = Actions.Default});
}
}
AuthUtil:
public class AuthUtil
{
public static string Token(string email)
{
var identity = new ClaimsIdentity(Startup.OAuthOptions.AuthenticationType);
identity.AddClaim(new Claim(ClaimTypes.Name, email));
var ticket = new AuthenticationTicket(identity, new AuthenticationProperties());
var currentUtc = new SystemClock().UtcNow;
ticket.Properties.IssuedUtc = currentUtc;
ticket.Properties.ExpiresUtc = currentUtc.Add(TimeSpan.FromMinutes(30));
var token = Startup.OAuthOptions.AccessTokenFormat.Protect(ticket);
return token;
}
public static void ConfigureWebApiToUseOnlyBearerTokenAuthentication(HttpConfiguration http)
{
http.SuppressDefaultHostAuthentication();
http.Filters.Add(new HostAuthenticationFilter(OAuthDefaults.AuthenticationType));
}
}
LoginController:
public class LoginController : ApiController
{
...
public HttpResponseMessage Post([FromBody] LoginJson loginJson)
{
HttpResponseMessage loginResponse;
if (/* is valid login */)
{
var accessToken = AuthUtil.Token(loginJson.email);
loginResponse = /* HTTP response including accessToken */;
}
else
{
loginResponse = /* HTTP response with error */;
}
return loginResponse;
}
}
Using the above code, I'm able to log in and store the bearer token client-side in a cookie, and then make calls to controllers marked with [Authorize], and it lets me in.
My questions are:
Where / how is the bearer token being stored server-side? It seems like this is happening through one of the OWIN calls but I can't tell where.
Is it possible to persist the bearer tokens to a database server-side so that they can remain in place after a Web API server restart?
If the answer to #2 is no, is there anyway for a client to maintain its bearer token and re-use it even after the Web API goes down and comes back up? While this may be rare in Production, it can happen quite often doing local testing.
They're not stored server side -- they're issued to the client and the client presents them on each call. They're verified because they're signed by the owin host's protection key. In SystemWeb hosting, that protection key is the machineKey setting from web.config.
That's unnecessary, as long as the protection key the owin host uses doesn't change across server restarts.
A client can hold onto a token for as long as the token is valid.
For those who are looking for how to set this up in web.config, here is a sample:
<system.web>
<machineKey validation="HMACSHA256" validationKey="64-hex"
decryption="AES" decryptionKey="another-64-hex"/>
</system.web>
You need both validationKey and decryptionKey to make it work.
And here is how to generate keys
https://msdn.microsoft.com/en-us/library/ms998288.aspx
To add to this, the token can be persisted server side using the SessionStore property of CookieAuthenticationOptions. I wouldn't advocate doing this, but it's there if your tokens become excessively large.
This is an IAuthenticationSessionStore so you could implement your own storage medium.
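A hedged sketch of the shape involved (Katana's IAuthenticationSessionStore from Microsoft.Owin.Security.Cookies; the MyTicketStore name and the storage behind it are placeholders):
public class MyTicketStore : IAuthenticationSessionStore
{
    // Backing storage (memory, database, Redis, ...) is up to you; omitted here.
    public Task<string> StoreAsync(AuthenticationTicket ticket)
    {
        // Save the ticket and return the key handed back to the client.
        throw new NotImplementedException();
    }

    public Task RenewAsync(string key, AuthenticationTicket ticket)
    {
        throw new NotImplementedException();
    }

    public Task<AuthenticationTicket> RetrieveAsync(string key)
    {
        throw new NotImplementedException();
    }

    public Task RemoveAsync(string key)
    {
        throw new NotImplementedException();
    }
}

// Wiring it up so only a session key travels in the cookie:
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    SessionStore = new MyTicketStore()
});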
By default the token is not stored by the server. Only your client has it and is sending it through the authorization header to the server.
If you used the default template provided by Visual Studio, in the Startup ConfigureAuth method the following IAppBuilder extension is called: app.UseOAuthBearerTokens(OAuthOptions).
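For orientation, UseOAuthBearerTokens is roughly a shorthand for registering the token-issuing endpoint plus bearer-token validation; a paraphrase of what it wires up (not code from the template) looks like:
// Issues tokens at OAuthOptions.TokenEndpointPath ("/Token" above).
app.UseOAuthAuthorizationServer(OAuthOptions);

// Validates incoming "Authorization: Bearer ..." headers and builds the identity.
app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions());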