We are working with the Microsoft Distributed Cache implementation for .NET Core. See https://learn.microsoft.com/en-us/aspnet/core/performance/caching/distributed?view=aspnetcore-2.1 for more information.
Currently we can get a key with the following code:
var cacheKey = "application:customer:1234:profile";
var profile = _distributedCache.GetString(cacheKey);
What I want to do is the following:
var cacheKey = "application:customer:1234:*";
var customerData = _distributedCache.GetString(cacheKey);
So that we can get the following keys with this pattern:
application:customer:1234:Profile
application:customer:1234:Orders
application:customer:1234:Invoices
application:customer:1234:Payments
I could not get this to work with or without a wildcard. Is there a solution that doesn't require adding another Redis NuGet package?
This isn't supported via the IDistributedCache interface. It's designed to get/set a specific key, not to return a range of keys. If you need to do something like this, you'll need to drop down into the underlying store, i.e. Redis. The good news is that you don't need anything additional: the same StackExchange.Redis library that backs the Redis IDistributedCache implementation also provides a client you can use directly.
In particular, for your scenario, you'd need some code like:
var server = _redis.GetServer(someServer);

foreach (var key in server.Keys(pattern: cacheKey))
{
    // do something with each matching key
}
Here, _redis is an instance of ConnectionMultiplexer. This should already be registered in your service collection, since it's used by the Redis IDistributedCache implementation. As a result, you can inject it into the controller or other class where this code lives.
The someServer variable is a reference to one of your Redis servers. You can get all registered Redis servers via _redis.GetEndPoints(). That returns a collection of endpoints, which you can pick from or enumerate over. Alternatively, you can connect directly to a particular server by passing the host string and port:
var server = _redis.GetServer("localhost", 6379);
Be advised, though, that Keys() results in either a SCAN or KEYS command being issued to the Redis server. Which one is used depends on the server version, but either is fairly inefficient, as the entire keyspace must be examined. It is recommended that you do not use this in production, or, if you must, that you issue it against a replica (slave) server.
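Putting the pieces together, a minimal sketch might look like the following. This assumes the cached entries hold string values and that the Redis IDistributedCache was registered without an InstanceName prefix; if a prefix is configured, it would need to be stripped from each key before calling GetString:

// Enumerate keys matching the pattern on the server, then read each value
// back through IDistributedCache. Assumes no InstanceName prefix is set.
var server = _redis.GetServer(_redis.GetEndPoints().First());
var customerData = new Dictionary<string, string>();

foreach (var key in server.Keys(pattern: "application:customer:1234:*"))
{
    customerData[key] = _distributedCache.GetString(key);
}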
With your question technically answered, given the complexity and the inherent inefficiency of SCAN/KEYS, you'd be better served just doing something like:
var cacheKeyPrefix = "application:customer:1234";
var profile = _distributedCache.GetString($"{cacheKeyPrefix}:Profile");
var orders = _distributedCache.GetString($"{cacheKeyPrefix}:Orders");
var invoices = _distributedCache.GetString($"{cacheKeyPrefix}:Invoices");
var payments = _distributedCache.GetString($"{cacheKeyPrefix}:Payments");
That's going to end up being much quicker and doesn't require anything special.
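If those reads are independent, they can also be issued concurrently with the async extension methods; a small sketch reusing cacheKeyPrefix from above:

// Hypothetical concurrent reads via GetStringAsync; each task resolves
// to the cached string (or null if the key is missing).
var profileTask  = _distributedCache.GetStringAsync($"{cacheKeyPrefix}:Profile");
var ordersTask   = _distributedCache.GetStringAsync($"{cacheKeyPrefix}:Orders");
var invoicesTask = _distributedCache.GetStringAsync($"{cacheKeyPrefix}:Invoices");
var paymentsTask = _distributedCache.GetStringAsync($"{cacheKeyPrefix}:Payments");
await Task.WhenAll(profileTask, ordersTask, invoicesTask, paymentsTask);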
I know the question is a bit old, but this is based on this answer: How to get all keys data from redis cache
Here is an example solution:
in CustomerRepository.cs
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Configuration;
using Newtonsoft.Json;
using StackExchange.Redis;
public class CustomerRepository : ICustomerRepository
{
    private readonly IDistributedCache _redis;
    private readonly IConfiguration _configuration;

    public CustomerRepository(IDistributedCache redis, IConfiguration configuration)
    {
        _redis = redis;
        _configuration = configuration;
    }
    /// <summary>Replace `object` with your own class name.</summary>
    public async Task<List<object>> GetCustomersAsync(string name)
    {
        ConfigurationOptions options = ConfigurationOptions.Parse(
            _configuration.GetConnectionString("DefaultConnection"));
        ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(options);
        EndPoint endPoint = connection.GetEndPoints().First();

        // Find all keys matching the customer pattern via the server API.
        var pattern = $"application:customer:{name}:*";
        RedisKey[] keys = connection.GetServer(endPoint).Keys(pattern: pattern).ToArray();

        // Read each matching entry through IDistributedCache and deserialize it.
        var results = new List<object>();
        foreach (var key in keys)
        {
            var json = await _redis.GetStringAsync(key);
            if (json != null)
            {
                results.Add(JsonConvert.DeserializeObject<object>(json));
            }
        }
        return results;
    }
}
in appsettings.json
{
  "ConnectionStrings": {
    "DefaultConnection": "localhost:6379,password=YOUR_PASSWORD_HERE"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  }
}
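For completeness, the Redis-backed IDistributedCache that the repository injects would typically be registered in Startup.ConfigureServices with the same connection string. A sketch, assuming the Microsoft.Extensions.Caching.StackExchangeRedis package (older versions used AddDistributedRedisCache from Microsoft.Extensions.Caching.Redis):

// Register the Redis implementation of IDistributedCache so the
// repository's injected IDistributedCache talks to the same server.
services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = Configuration.GetConnectionString("DefaultConnection");
});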
I have an ASP.NET Core web application set up with SignalR, scaled out with Redis.
Using the built-in groups works fine:
Clients.Group("Group_Name");
and survives multiple load balancers. I'm assuming that SignalR persists those groups in Redis automatically, so all servers know what groups we have and who is subscribed to them.
However, in my situation, I can't rely on Groups (or Users) alone, because there is no way to map a connectionId back to its group (say, when overriding OnDisconnectedAsync, where only the connection id is known); you always need the group name to identify the group. I need that mapping to identify which part of the group is online, so that when OnDisconnectedAsync is called, I know which group this connection belongs to and on which side of the conversation it is.
I've done some research, and everything I found (including the Microsoft docs) suggests using something like:
static readonly ConcurrentDictionary<string, ConversationInformation> connectionMaps;
in the hub itself.
Now, this is a great solution (and thread-safe), except that it exists only on one of the load-balancer server's memory, and the other servers have a different instance of this dictionary.
The question is, do I have to persist connectionMaps manually? Using Redis for example?
Something like:
public class ChatHub : Hub
{
    static readonly ConcurrentDictionary<string, ConversationInformation> connectionMaps;

    ChatHub(IDistributedCache distributedCache)
    {
        connectionMaps = distributedCache.Get("ConnectionMaps");
        // I think connectionMaps should not be static any more.
    }
}
And if yes, is it thread-safe? If not, can you suggest a better solution that works with load balancing?
I have been battling with the same issue on this end. What I've come up with is to persist the collections in the Redis cache, using a StackExchange.Redis IDatabaseAsync alongside locks to handle concurrency.
This unfortunately makes the entire process effectively synchronous, but I couldn't quite figure out a way around it.
Here's the core of what I'm doing. This acquires a lock and returns a deserialized collection from the cache:
private async Task<ConcurrentDictionary<int, HubMedia>> GetMediaAttributes(bool requireLock)
{
    if (requireLock)
    {
        var retryTime = 0;
        try
        {
            while (!await _redisDatabase.LockTakeAsync(_mediaAttributesLock, _lockValue, _defaultLockDuration))
            {
                // Wait until we can get a lock on the data; retry every 100 ms.
                await Task.Delay(100);
                retryTime += 100;
                if (retryTime > _defaultLockDuration.TotalMilliseconds)
                {
                    _logger.LogError("Failed to get Media Attributes");
                    return null;
                }
            }
        }
        catch (TaskCanceledException e)
        {
            _logger.LogError("Failed to take lock within the default 5 second wait time " + e);
            return null;
        }
    }

    var mediaAttributes = await _redisDatabase.StringGetAsync(MEDIA_ATTRIBUTES_LIST);
    if (!mediaAttributes.HasValue)
    {
        return new ConcurrentDictionary<int, HubMedia>();
    }
    return JsonConvert.DeserializeObject<ConcurrentDictionary<int, HubMedia>>(mediaAttributes);
}
I update the collection like this after I've finished manipulating it:
private async Task<bool> UpdateCollection(string redisCollectionKey, object collection, string lockKey)
{
    var success = false;
    try
    {
        success = await _redisDatabase.StringSetAsync(redisCollectionKey, JsonConvert.SerializeObject(collection, new JsonSerializerSettings
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore
        }));
    }
    finally
    {
        await _redisDatabase.LockReleaseAsync(lockKey, _lockValue);
    }
    return success;
}
And when I'm done, I just make sure the lock is released so other instances can grab it:
private async Task ReleaseLock(string lockKey)
{
    await _redisDatabase.LockReleaseAsync(lockKey, _lockValue);
}
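For context, a caller would pair these helpers roughly like this (a sketch; mediaId and updatedMedia are hypothetical, and note that UpdateCollection already releases the lock in its finally block):

// Hypothetical caller: load the shared map under the lock, mutate it,
// then write it back (which also releases the lock).
var mediaAttributes = await GetMediaAttributes(requireLock: true);
if (mediaAttributes != null)
{
    mediaAttributes[mediaId] = updatedMedia; // hypothetical mutation
    await UpdateCollection(MEDIA_ATTRIBUTES_LIST, mediaAttributes, _mediaAttributesLock);
}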
I'd be happy to hear if you find a better way of doing this. I struggled to find any documentation on scale-out with data retention and sharing.
I've been building an ASP.NET Core website using the ASP.NET Boilerplate template. So far, I've been storing all of the settings in the appsettings.json file. As the application gets bigger, I'm thinking I should start storing some settings via ABP's SettingProvider and ISettingStore.
My question is: does anyone have, or know of, a sample application that shows how to implement ISettingStore and store the settings in the database?
The only post I could find so far is this one, but the link hikalkan supplies is broken.
Thanks for any help,
Joe
ABP stores settings in memory with default values. When you insert a new setting value into the database, it reads that value from the database and overrides the default. So when the database has no settings, all settings are at their default values. Setting values are stored in the AbpSettings table.
To start using the settings mechanism, create your own setting provider inheriting from SettingProvider, and initialize it in your module (e.g. ModuleZeroSampleProjectApplicationModule; see the sketch after the provider class below).
As the SettingProvider is automatically registered with dependency injection, you can inject ISettingManager wherever you want.
public class MySettingProvider : SettingProvider
{
    public override IEnumerable<SettingDefinition> GetSettingDefinitions(SettingDefinitionProviderContext context)
    {
        return new[]
        {
            new SettingDefinition(
                "SmtpServerAddress",
                "127.0.0.1"
            ),

            new SettingDefinition(
                "PassiveUsersCanNotLogin",
                "true",
                scopes: SettingScopes.Application | SettingScopes.Tenant
            ),

            new SettingDefinition(
                "SiteColorPreference",
                "red",
                scopes: SettingScopes.User,
                isVisibleToClients: true
            )
        };
    }
}
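Initializing the provider in your module looks roughly like this (a sketch following the documented ABP pattern):

public class ModuleZeroSampleProjectApplicationModule : AbpModule
{
    public override void PreInitialize()
    {
        // Register the custom setting provider with ABP's setting system.
        Configuration.Settings.Providers.Add<MySettingProvider>();
    }
}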
In application services and controllers you don't need to inject ISettingManager (there's already a property-injected instance); you can use the SettingManager property directly. For example:
//Getting a boolean value (async call)
var value1 = await SettingManager.GetSettingValueAsync<bool>("PassiveUsersCanNotLogin");
Other classes (like domain services) can inject ISettingManager:
public class UserEmailer : ITransientDependency
{
    private readonly ISettingManager _settingManager;

    public UserEmailer(ISettingManager settingManager)
    {
        _settingManager = settingManager;
    }

    [UnitOfWork]
    public virtual async Task TestMethod()
    {
        // tenantAdmin is assumed to be a user entity obtained elsewhere.
        var settingValue = _settingManager.GetSettingValueForUser("SmtpServerAddress", tenantAdmin.TenantId, tenantAdmin.Id);
    }
}
Note: To modify a setting, you can use these methods on SettingManager: ChangeSettingForApplicationAsync, ChangeSettingForTenantAsync, and ChangeSettingForUserAsync.
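For instance, changing an application-level setting would look something like this (a one-line sketch reusing the setting defined in the provider above; the address value is hypothetical):

// Overrides the default SMTP address at application scope; the new value
// is persisted to the AbpSettings table.
await SettingManager.ChangeSettingForApplicationAsync("SmtpServerAddress", "smtp.example.com");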
I would like to use System.Runtime.Caching.MemoryCache in a web role project which contains a WCF service.
Can anybody please let me know whether we can use System.Runtime.Caching.MemoryCache in a cloud web role project?
If yes, please let me know about memory and other constraints.
Yes you can.
You should add a reference to System.Runtime.Caching to the Web Role project, then use something like the code below (it does almost nothing and is certainly not a best practice).
I just tried it with ASP.NET MVC in a cloud web role under the Azure Emulator, and it works.
Regarding limits, there are two properties, CacheMemoryLimit and PhysicalMemoryLimit, that you can use to retrieve the configured values (CacheMemoryLimit is reported in bytes, PhysicalMemoryLimit as a percentage of physical memory). I do not know if there are any limits beyond these in terms of in-memory cache in Azure Cloud Services.
private static object _lock = new object();
private static MemoryCache _cache = new MemoryCache("ThisIsMyCache");

public static object GetItem(string key)
{
    lock (_lock)
    {
        var item = _cache.Get(key);
        if (item == null)
        {
            item = InitializeItem(key);
            _cache.Set(key, item, new CacheItemPolicy());
        }
        return item;
    }
}

private static object InitializeItem(string key)
{
    return new { Value = key };
}
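To inspect the limits mentioned above, you could read them off the cache instance, e.g. when debugging (a sketch using the _cache field from the sample):

// CacheMemoryLimit is reported in bytes; PhysicalMemoryLimit is the
// percentage of physical memory the cache may use.
Console.WriteLine($"CacheMemoryLimit: {_cache.CacheMemoryLimit} bytes");
Console.WriteLine($"PhysicalMemoryLimit: {_cache.PhysicalMemoryLimit}%");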
I am trying to learn more about ASP.NET 5 and the new .NET Core, and I'm trying to figure out if there is a built-in memory cache.
I have found out about Microsoft.Framework.Caching.Memory.MemoryCache.
However, there is very little documentation available.
Any help would be appreciated.
There are two caching interfaces, IMemoryCache and IDistributedCache. IDistributedCache is intended for cloud-hosted scenarios where there is a shared cache between multiple instances of the application. Use IMemoryCache otherwise.
You can add them in your startup by calling the method below:
private static void ConfigureCaching(IServiceCollection services)
{
    // Adds a default in-memory implementation of IDistributedCache, which is very fast but
    // the cache will not be shared between instances of the application.
    // Also adds IMemoryCache.
    services.AddCaching();

    // Uncomment the following line to use the Redis implementation of
    // IDistributedCache. This will override any previously registered IDistributedCache
    // service. Redis is a very fast cache provider and the recommended distributed cache
    // provider.
    // services.AddTransient<IDistributedCache, RedisCache>();

    // Uncomment the following lines to use the Microsoft SQL Server implementation of
    // IDistributedCache. Note that this would require setting up the session state database.
    // Redis is the preferred cache implementation, but you can use SQL Server if you don't
    // have an alternative.
    // services.AddSqlServerCache(o =>
    // {
    //     o.ConnectionString =
    //         "Server=.;Database=ASPNET5SessionState;Trusted_Connection=True;";
    //     o.SchemaName = "dbo";
    //     o.TableName = "Sessions";
    // });
}
IDistributedCache is the one most people will want to use to get the most out of caching, but it has a very primitive interface (you can only get/save byte arrays with it) and few extension methods. See this issue for more information.
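To illustrate that byte-oriented surface, the string extension methods make a simple round trip look roughly like this (a sketch assuming an injected IDistributedCache named distributedCache; the key name is arbitrary):

// SetStringAsync/GetStringAsync wrap the byte[] interface and handle
// the string encoding for you.
await distributedCache.SetStringAsync("greeting", "Hello, cache!");
string greeting = await distributedCache.GetStringAsync("greeting");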
You can now inject either IDistributedCache or IMemoryCache into your controller or service and use them as normal. Using them is pretty simple; they are a bit like dictionaries, after all. Here is an example with IMemoryCache:
public class MyService : IMyService
{
    private readonly IDatabase database;
    private readonly IMemoryCache memoryCache;

    public MyService(IDatabase database, IMemoryCache memoryCache)
    {
        this.database = database;
        this.memoryCache = memoryCache;
    }

    public string GetCachedObject()
    {
        string cachedObject;
        if (!this.memoryCache.TryGetValue("Key", out cachedObject))
        {
            cachedObject = this.database.GetObject();
            this.memoryCache.Set(
                "Key",
                cachedObject,
                new MemoryCacheEntryOptions()
                {
                    SlidingExpiration = TimeSpan.FromHours(1)
                });
        }
        return cachedObject;
    }
}
Here's a MemoryCache sample: https://github.com/aspnet/Caching/tree/dev/samples/MemoryCacheSample
More samples: https://github.com/aspnet/Caching/tree/dev/samples
I'm running two instances of my application. In one instance, I save one of my entities. When I check RavenDB (http://localhost:8080/raven), I can see the change. Then, in my other client, I run the code below, but I don't see the changes from the other application. What do I need to do to get the most recent data from the database?
public IEnumerable<CustomVariableGroup> GetAll()
{
    return Session
        .Query<CustomVariableGroup>()
        .Customize(x => x.WaitForNonStaleResults());
}
Edit: The code above works if I try to make a change and get a concurrency exception. After that, when I call refresh (which invokes the above code), it works.
Here is the code that does the save:
public void Save<T>(T objectToSave)
{
    Guid eTag = (Guid)Session.Advanced.GetEtagFor(objectToSave);
    Session.Store(objectToSave, eTag);
    Session.SaveChanges();
}
And here is the class that contains the Database and Session:
public abstract class DataAccessLayerBase
{
    /// <summary>
    /// Gets the database.
    /// </summary>
    protected static DocumentStore Database { get; private set; }

    /// <summary>
    /// Gets the session.
    /// </summary>
    protected static IDocumentSession Session { get; private set; }

    static DataAccessLayerBase()
    {
        if (Database != null) { return; }
        Database = GetDatabase();
        Session = GetSession();
    }

    private static DocumentStore GetDatabase()
    {
        string databaseUrl = ConfigurationManager.AppSettings["databaseUrl"];
        DocumentStore documentStore = new DocumentStore();
        try
        {
            //documentStore.ConnectionStringName = "RavenDb"; // See app.config for why this is commented.
            documentStore.Url = databaseUrl;
            documentStore.Initialize();
        }
        catch
        {
            documentStore.Dispose();
            throw;
        }
        return documentStore;
    }

    private static IDocumentSession GetSession()
    {
        IDocumentSession session = Database.OpenSession();
        session.Advanced.UseOptimisticConcurrency = true;
        return session;
    }
}
Lacking more detailed information and some code, I can only guess...
Please make sure that you call .SaveChanges() on your session. Without explicitly specifying an ITransaction, your IDocumentSession is isolated and transactional between its opening and the call to .SaveChanges(): either all operations succeed or none do. But if you don't call it, all your previous .Store calls will be lost.
If I was wrong, please post more details about your code.
EDIT: Second answer (after additional information):
Your problem has to do with the way RavenDB caches on the client side. By default, RavenDB caches every GET request within a DocumentSession. Plain queries are just GET requests (and no, it has nothing to do with whether your index is dynamic or manually defined upfront), so they will be cached. The solution in your application is to dispose of the session and open a new one.
I suggest you rethink your session lifecycle. It seems that your sessions live too long; otherwise this wouldn't be an issue. If you're building a web application, I recommend opening the session at the beginning of a request and closing it at the end. Have a look at RaccoonBlog to see this implemented elegantly.
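A per-operation session might look roughly like this (a sketch; the DocumentStore stays a long-lived singleton while each session is short-lived and disposed):

public IEnumerable<CustomVariableGroup> GetAll()
{
    // A fresh session per operation means the client-side cache cannot
    // serve results that are stale across requests.
    using (IDocumentSession session = Database.OpenSession())
    {
        return session.Query<CustomVariableGroup>().ToList();
    }
}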
Bob,
It looks like you have but a single session in the application, which isn't right. The following article talks about NHibernate, but the session management parts apply to RavenDB as well:
http://archive.msdn.microsoft.com/mag200912NHibernate
This code is meaningless:
Guid eTag = (Guid)Session.Advanced.GetEtagFor(objectToSave);
Session.Store(objectToSave, eTag);
It is basically a no-op, but one that looks important. You seem to be trying to work with a model where you have to manually manage all the saves; don't do that. You only need to manage things yourself when you create a new item, that is all.
As for the reason you get this problem, here is a sample:
var session = documentStore.OpenSession();
var post1 = session.Load<Post>(1);

// ... the post is changed by another client here ...

var post2 = session.Load<Post>(1); // will NOT go to the server; returns the same instance as post1
Assert.ReferenceEquals(post1, post2);
Sessions are short-lived and typically used in the scope of a single form or request.