Apache Ignite performance problem on Azure Kubernetes Service

I'm using Apache Ignite on Azure Kubernetes Service as a distributed cache. I also have a web API on Azure based on .NET 6.
The Ignite service runs stably and performs very well on AKS. But on the first request, the API takes around 3 seconds to connect to Ignite. After that, Ignite responses take around 100 ms, which is great. (These timings are from my Web API performance logs for the GetProduct function.)
At first I tried registering the Ignite service as a singleton, but it sometimes failed with 'connection closed'. How can I keep the Ignite connection open at all times? Or does anyone have a better idea?
Here is my latest GetProduct code:
[HttpGet("getProduct")]
public IActionResult GetProduct(string barcode)
{
Stopwatch _stopWatch = new Stopwatch();
_stopWatch.Start();
Product product;
CacheManager cacheManager = new CacheManager();
cacheManager.ProductCache.TryGet(barcode, out product);
if(product == null)
{
return NotFound(new ApiResponse<Product>(product));
}
cacheManager.DisposeIgnite();
_logger.LogWarning("Loaded in " + _stopWatch.ElapsedMilliseconds + " ms...");
return Ok(new ApiResponse<Product>(product));
}
Here is my CacheManager class:
public CacheManager()
{
    ConnectIgnite();
    InitializeCaches();
}

public void ConnectIgnite()
{
    _ignite = Ignition.StartClient(GetIgniteConfiguration());
}

public IgniteClientConfiguration GetIgniteConfiguration()
{
    var appSettingsJson = AppSettingsJson.GetAppSettings();
    var igniteEndpoints = appSettingsJson["AppSettings:IgniteEndpoint"];
    var igniteUser = appSettingsJson["AppSettings:IgniteUser"];
    var ignitePassword = appSettingsJson["AppSettings:IgnitePassword"];
    var nodeList = igniteEndpoints.Split(",");

    var config = new IgniteClientConfiguration
    {
        Endpoints = nodeList,
        UserName = igniteUser,
        Password = ignitePassword,
        EnablePartitionAwareness = true,
        SocketTimeout = TimeSpan.FromMilliseconds(System.Threading.Timeout.Infinite)
    };
    return config;
}

Make it a singleton. An Ignite node, even in client mode, is supposed to run for the entire lifetime of your application, and all Ignite APIs are thread-safe. If you get a connection error, please provide more details (exception stack trace, how you create the singleton, etc.).
You can also try the Ignite thin client, which consumes fewer resources and connects instantly: https://ignite.apache.org/docs/latest/thin-clients/dotnet-thin-client.
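As an illustration, here is a minimal sketch of registering the thin client (which the question's Ignition.StartClient code already uses) as a DI singleton in a .NET 6 API. The configuration keys are taken from the question; the Program.cs wiring itself is an assumption:

using Apache.Ignite.Core;
using Apache.Ignite.Core.Client;

var builder = WebApplication.CreateBuilder(args);

// Create the thin client once; IIgniteClient is thread-safe and is meant
// to live for the lifetime of the application.
builder.Services.AddSingleton<IIgniteClient>(_ =>
    Ignition.StartClient(new IgniteClientConfiguration
    {
        Endpoints = builder.Configuration["AppSettings:IgniteEndpoint"].Split(','),
        UserName = builder.Configuration["AppSettings:IgniteUser"],
        Password = builder.Configuration["AppSettings:IgnitePassword"],
        EnablePartitionAwareness = true
    }));

var app = builder.Build();

The controller (or CacheManager) then receives IIgniteClient through constructor injection instead of constructing and disposing a client per request, which avoids paying the connection cost on every call.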

Related

Register Hibernate 5 Event Listeners

I am working on a legacy non-Spring application that is being migrated from Hibernate 3 to Hibernate 5.6.0.Final (the latest at this time). I have generally never used Hibernate event listeners in my work, so this is quite new to me, and I am studying how they work in Hibernate 5.
Currently in some test class we have defined the code this way for Hibernate 3:
protected static Configuration createSecuredDatabaseConfig() {
    Configuration config = createUnrestrictedDatabaseConfig();
    config.setListener("pre-insert", "com.app.server.services.db.eventlisteners.MySecurityHibernateEventListener");
    config.setListener("pre-update", "com.app.server.services.db.eventlisteners.MySecurityHibernateEventListener");
    config.setListener("pre-delete", "com.app.server.services.db.eventlisteners.MySecurityHibernateEventListener");
    config.setListener("pre-load", "com.app.server.services.db.eventlisteners.EkoSecurityHibernateEventListener");
    return config;
}
This is obviously no longer valid, and I believe I need to create a Hibernate Integrator, which I have done.
public class MyEventListenerIntegrator implements Integrator {

    @Override
    public void integrate(Metadata metadata, SessionFactoryImplementor sessionFactory,
                          SessionFactoryServiceRegistry serviceRegistry) {
        EventListenerRegistry eventListenerRegistry = serviceRegistry.getService(EventListenerRegistry.class);
        eventListenerRegistry.getEventListenerGroup(EventType.PRE_INSERT).appendListener(new MySecurityHibernateEventListener());
        eventListenerRegistry.getEventListenerGroup(EventType.PRE_UPDATE).appendListener(new MySecurityHibernateEventListener());
        eventListenerRegistry.getEventListenerGroup(EventType.PRE_DELETE).appendListener(new MySecurityHibernateEventListener());
        eventListenerRegistry.getEventListenerGroup(EventType.PRE_LOAD).appendListener(new MySecurityHibernateEventListener());
    }
}
So, now I believe the next step is to add this to the session via the registry builder. I am using this website to help me:
https://www.boraji.com/hibernate-5-event-listener-example
Because we were using older Hibernate 3, we had code to create our session factory as follows:
protected static SessionFactory buildSessionFactory(Database db)
{
    if (db == null) {
        throw new NullPointerException("Database specifier cannot be null");
    }
    try {
        Configuration config = createSessionFactoryConfiguration(db);
        String url = config.getProperty("connection.url");
        String user = config.getProperty("connection.username");
        String password = config.getProperty("connection.password");
        try {
            String dbDriver = config.getProperty("hibernate.connection.driver_class");
            Class.forName(dbDriver);
            // Connectivity check only; try-with-resources closes the connection.
            try (Connection conn = DriverManager.getConnection(url, user, password)) {
            }
        }
        catch (ClassNotFoundException | SQLException error) {
            logger.info("Didn't find the driver; on QA or production it's okay to assume we have a DB connection");
            error.printStackTrace();
        }
        SessionFactory sessionFactory = config.buildSessionFactory();
        sessionFactoryConfigs.put(sessionFactory, config); // Cannot recover config from factory instance, must be stored.
        return sessionFactory;
    }
    catch (Throwable ex) {
        // Make sure you log the exception, as it might be swallowed
        logger.error("Initial SessionFactory creation failed.", ex);
        throw new ExceptionInInitializerError(ex);
    }
}
The link I referred to above creates the SessionFactory in quite a different way, so I'll be testing that out to see if it works in our app.
Without Spring handling our sessions and transactions, this app codes them by hand the way it was done before Spring; I haven't seen that kind of code in years.
I solved this issue with help from the link I provided above. I didn't copy exactly what they did, but some of it helped. My solution is as follows:
protected static SessionFactory createSecuredDatabaseConfig() {
    Configuration config = createUnrestrictedDatabaseConfig();
    BootstrapServiceRegistry bootstrapRegistry =
        new BootstrapServiceRegistryBuilder()
            .applyIntegrator(new EkoEventListenerIntegrator())
            .build();
    ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder(bootstrapRegistry).applySettings(config.getProperties()).build();
    SessionFactory sessionFactory = config.buildSessionFactory(serviceRegistry);
    return sessionFactory;
}
This was it. I tried multiple different ways to register the events without the BootstrapServiceRegistry, but none of those worked. I did have to create the integrator. What I did NOT include was the following:
MetadataSources sources = new MetadataSources(serviceRegistry)
    .addPackage("com.myproject.server.model");
Metadata metadata = sources.getMetadataBuilder().build();
// did not create the sessionFactory this way
sessionFactory = metadata.getSessionFactoryBuilder().build();
If I had gone further and used this method to create the sessionFactory, all of my queries would have complained about not being able to find the parameterName, which is a separate issue.
The Hibernate Integrator and this way of creating the sessionFactory are all for the unit tests. Without registering these events, one unit test would fail; now it doesn't. So this solves my problem for now.

How to span a ConcurrentDictionary across load-balancer servers when using SignalR hub with Redis

I have an ASP.NET Core web application set up with SignalR scaled out with Redis.
Using the built-in groups works fine:
Clients.Group("Group_Name");
and survives multiple load balancers. I'm assuming that SignalR persists those groups in Redis automatically, so all servers know what groups we have and who is subscribed to them.
However, in my situation I can't just rely on Groups (or Users), because there is no way to map a connectionId back to its group (say, when overriding OnDisconnectedAsync, where only the connection id is known); you always need the Group_Name to identify the group. I need that mapping to know which part of the group is online, so when OnDisconnectedAsync is called I know which group this user belongs to, and on which side of the conversation they are.
I've done some research, and everything I found (including the Microsoft docs) suggested using something like:
static readonly ConcurrentDictionary<string, ConversationInformation> connectionMaps;
in the hub itself.
Now, this is a great solution (and thread-safe), except that it exists only in the memory of one load-balanced server; the other servers have different instances of the dictionary.
The question is: do I have to persist connectionMaps manually, using Redis for example?
Something like:
public class ChatHub : Hub
{
    static readonly ConcurrentDictionary<string, ConversationInformation> connectionMaps;
    ChatHub(IDistributedCache distributedCache)
    {
        connectionMaps = distributedCache.Get("ConnectionMaps");
        /// I think connectionMaps should not be static any more.
    }
}
And if yes, is it thread-safe? If not, can you suggest a better solution that works with load balancing?
I've been battling with the same issue on this end. What I've come up with is to persist the collections in the Redis cache, using a StackExchange.Redis IDatabaseAsync alongside locks to handle concurrency.
Unfortunately this makes the whole process effectively sequential, but I couldn't quite figure a way around that.
Here's the core of what I'm doing; this acquires a lock and returns the deserialised collection from the cache:
private async Task<ConcurrentDictionary<int, HubMedia>> GetMediaAttributes(bool requireLock)
{
    if (requireLock)
    {
        var retryTime = 0;
        try
        {
            while (!await _redisDatabase.LockTakeAsync(_mediaAttributesLock, _lockValue, _defaultLockDuration))
            {
                // Wait until we can get a lock on the data; retry every 100 ms.
                await Task.Delay(100);
                retryTime += 100;
                if (retryTime > _defaultLockDuration.TotalMilliseconds)
                {
                    _logger.LogError("Failed to get Media Attributes");
                    return null;
                }
            }
        }
        catch (TaskCanceledException e)
        {
            _logger.LogError("Failed to take lock within the default 5 second wait time " + e);
            return null;
        }
    }
    var mediaAttributes = await _redisDatabase.StringGetAsync(MEDIA_ATTRIBUTES_LIST);
    if (!mediaAttributes.HasValue)
    {
        return new ConcurrentDictionary<int, HubMedia>();
    }
    return JsonConvert.DeserializeObject<ConcurrentDictionary<int, HubMedia>>(mediaAttributes);
}
I update the collection like so after I'm done manipulating it:
private async Task<bool> UpdateCollection(string redisCollectionKey, object collection, string lockKey)
{
    var success = false;
    try
    {
        success = await _redisDatabase.StringSetAsync(redisCollectionKey, JsonConvert.SerializeObject(collection, new JsonSerializerSettings
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore
        }));
    }
    finally
    {
        await _redisDatabase.LockReleaseAsync(lockKey, _lockValue);
    }
    return success;
}
And when I'm done, I just make sure the lock is released for other instances to grab and use:
private async Task ReleaseLock(string lockKey)
{
    await _redisDatabase.LockReleaseAsync(lockKey, _lockValue);
}
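Putting it together, a typical update path looks roughly like this (a sketch; someMediaId and updatedMedia are placeholders, the other names are from the code above):

// Take the lock, mutate the local copy, then write it back.
var media = await GetMediaAttributes(requireLock: true);
if (media != null)
{
    media[someMediaId] = updatedMedia;
    // UpdateCollection releases the lock in its finally block;
    // read-only paths call ReleaseLock(_mediaAttributesLock) instead.
    await UpdateCollection(MEDIA_ATTRIBUTES_LIST, media, _mediaAttributesLock);
}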
I'd be happy to hear if you find a better way of doing this. I struggled to find any documentation on scale-out with data retention and sharing.

Microsoft Distributed Redis Cache - Getting keys based on pattern

We are working with the Microsoft distributed cache implementation for .NET Core. See https://learn.microsoft.com/en-us/aspnet/core/performance/caching/distributed?view=aspnetcore-2.1 for more information.
Currently we can get a key with the following code:
var cacheKey = "application:customer:1234:profile";
var profile = _distributedCache.GetString(cacheKey);
What I want to do is the following:
var cacheKey = "application:customer:1234:*";
var customerData = _distributedCache.GetString(cacheKey);
So that we can get the following keys with this pattern:
application:customer:1234:Profile
application:customer:1234:Orders
application:customer:1234:Invoices
application:customer:1234:Payments
I could not get this to work, with or without a wildcard. Is there a solution that doesn't require another Redis NuGet package?
This isn't supported via the IDistributedCache interface. It's designed to get/set a specific key, not to return a range of keys. If you need to do something like this, you'll need to drop down into the underlying store, i.e. Redis. The good news is that you don't need anything additional: the same StackExchange.Redis library that is needed to support the Redis IDistributedCache implementation also provides a client you can use directly.
For your scenario in particular, you'd need some code like:
var server = _redis.GetServer(someServer);
foreach (var key in server.Keys(pattern: cacheKey))
{
    // do something
}
Here, _redis is an instance of ConnectionMultiplexer. This should already be registered in your service collection since it's utilized by the Redis IDistributedCache implementation. As a result, you can inject it into the controller or other class where this code exists.
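For example, a sketch of taking the multiplexer as a dependency (CustomerCacheService is a hypothetical name; if IConnectionMultiplexer isn't actually registered in your container, you'll need to register it yourself):

using System.Collections.Generic;
using System.Linq;
using StackExchange.Redis;

public class CustomerCacheService
{
    private readonly IConnectionMultiplexer _redis;

    public CustomerCacheService(IConnectionMultiplexer redis)
    {
        _redis = redis;
    }

    public IEnumerable<string> GetKeys(string pattern)
    {
        // Pick the first registered endpoint; see below for choosing a server.
        var server = _redis.GetServer(_redis.GetEndPoints().First());
        return server.Keys(pattern: pattern).Select(k => (string)k);
    }
}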
The someServer variable is a reference to one of your Redis servers. You can get all registered Redis servers via _redis.GetEndPoints(). That returns an array of endpoints, which you can either pick from or enumerate over. Additionally, you can connect directly to a particular server by passing the host string and port:
var server = _redis.GetServer("localhost", 6379);
Be advised, though, that Keys() results in either a SCAN or a KEYS command being issued at the Redis server. Which one is used depends on the server version, but either is fairly inefficient, because the entire keyspace must be examined. It is recommended that you do not use this in production, or, if you must, that you issue it on a replica server.
With your question technically answered: given the complexity and the inherent inefficiency of SCAN/KEYS, you'd be better served just doing something like:
var cacheKeyPrefix = "application:customer:1234";
var profile = _distributedCache.GetString($"{cacheKeyPrefix}:Profile");
var orders = _distributedCache.GetString($"{cacheKeyPrefix}:Orders");
var invoices = _distributedCache.GetString($"{cacheKeyPrefix}:Invoices");
var payments = _distributedCache.GetString($"{cacheKeyPrefix}:Payments");
That's going to end up being much quicker and doesn't require anything special.
I know the question is a bit old, but based on this answer: How to get all keys data from redis cache
this is an example solution.
In CustomerRepository.cs:
using Newtonsoft.Json;
using StackExchange.Redis;
// ...
public class CustomerRepository : ICustomerRepository
{
    private readonly IDistributedCache _redis;
    private readonly IConfiguration _configuration;

    public CustomerRepository(IDistributedCache redis, IConfiguration configuration)
    {
        _redis = redis;
        _configuration = configuration;
    }

    ///<summary>replace `object` with `class name`</summary>
    public async Task<List<object>> GetCustomersAsync(string name)
    {
        // Note: in production you would reuse a single ConnectionMultiplexer
        // rather than connecting on every call.
        ConfigurationOptions options = ConfigurationOptions.Parse(_configuration.GetConnectionString("DefaultConnection"));
        ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(options);
        EndPoint endPoint = connection.GetEndPoints().First();
        var pattern = $"application:customer:{name}:*";
        RedisKey[] keys = connection.GetServer(endPoint).Keys(pattern: pattern).ToArray();

        // Fetch each matching key through the distributed cache.
        var customers = new List<object>();
        foreach (var key in keys)
        {
            var result = await _redis.GetStringAsync(key);
            customers.Add(JsonConvert.DeserializeObject<object>(result));
        }
        return customers;
    }
}
In appsettings.json:
{
  "ConnectionStrings": {
    "DefaultConnection": "localhost:6379,password=YOUR_PASSWORD_HERE"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  }
}
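For completeness, the IDistributedCache injected into the repository has to be wired to the same Redis instance somewhere in startup. A sketch, assuming the Microsoft.Extensions.Caching.StackExchangeRedis package (the exact registration depends on your setup):

// In Startup.ConfigureServices (or the Program.cs service setup):
services.AddStackExchangeRedisCache(options =>
{
    // The same Redis instance the repository connects to directly.
    options.Configuration = Configuration.GetConnectionString("DefaultConnection");
});
services.AddScoped<ICustomerRepository, CustomerRepository>();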

Improving scale out performance for multiple web instances using SignalR Redis Backplane

I have SignalR integrated in our application, and it has been working just fine.
A couple of days ago, due to some requirements, we had to support scaling out our application, and hence we opted for SignalR scale-out using Redis.
However, since the integration, SignalR itself has stopped working, and the error we get is: "No transport could be initialized successfully. Try specifying a different transport or none at all for auto initialization."
Approaches applied:
- Tried different versions of SignalR, as suggested online - did not help
- Increased the connection timeout - did not help
Need some help in resolving this. Suggestions for any other approach are also welcome.
[Update1] Adding code snippets
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Any connection or hub wire up and configuration should go here
        GlobalHost.DependencyResolver.UseRedis("server", port, "password", "AppName");
        app.MapSignalR();
    }
}
For reference, I followed this link: https://learn.microsoft.com/en-us/aspnet/signalr/overview/performance/scaleout-with-redis
[Update2]
public void Configuration(IAppBuilder app)
{
    GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(110);
    GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(30);
    GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(10);
    GlobalHost.Configuration.TransportConnectTimeout = TimeSpan.FromSeconds(45);
    ConfigureAuth(app);
    ConfigureSignalR(app);

    // SignalR backplane code changes
    string server = RoleEnvironment.IsAvailable ?
        RoleEnvironment.GetConfigurationSettingValue(Constant.ConfigKeys.RedisCacheEndpoint) :
        ConfigurationManager.AppSettings[Constant.ConfigKeys.RedisCacheEndpoint];
    string port = RoleEnvironment.IsAvailable ?
        RoleEnvironment.GetConfigurationSettingValue(Constant.ConfigKeys.RedisCachePort) :
        ConfigurationManager.AppSettings[Constant.ConfigKeys.RedisCachePort];
    string password = RoleEnvironment.IsAvailable ?
        RoleEnvironment.GetConfigurationSettingValue(Constant.ConfigKeys.RedisCachePassword) :
        ConfigurationManager.AppSettings[Constant.ConfigKeys.RedisCachePassword];

    const string SIGNALR_REDIS_APPNAME = "Phoenix 2.0 Admin Tool";
    string connectionString = server + ":" + Int32.Parse(port) + ";password=" + password + ",ssl=True,abortConnect=False";
    RedisScaleoutConfiguration cfg = new RedisScaleoutConfiguration(connectionString, SIGNALR_REDIS_APPNAME);
    GlobalHost.DependencyResolver.UseRedis(cfg);
    app.MapSignalR();
}
We have an Azure App Service and are able to use SignalR with the Redis backplane. I did observe that things did not work properly depending on the connection string content. We used the RedisScaleoutConfiguration overload of the GlobalHost.DependencyResolver.UseRedis API instead of the overload that you show.
Here is a block of code based on our working startup (values changed to protect the vulnerable):
const string SIGNALR_REDIS_APPNAME = "OurAppName";
string connectionString = "thename.redis.cache.windows.net:6380;password=somelongsecret,ssl=True,abortConnect=False";
RedisScaleoutConfiguration cfg = new RedisScaleoutConfiguration(connectionString, SIGNALR_REDIS_APPNAME);
GlobalHost.DependencyResolver.UseRedis(cfg);
Obviously you can get an actual connection string from web.config with a bit more code. We also had trouble when specifying a non-default DB name, so we are using the default here.
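For example, a sketch of pulling the connection string from web.config rather than hard-coding it (the connection string name "SignalRRedis" is an assumption):

// web.config:
// <connectionStrings>
//   <add name="SignalRRedis" connectionString="thename.redis.cache.windows.net:6380;password=somelongsecret,ssl=True,abortConnect=False" />
// </connectionStrings>
string connectionString = ConfigurationManager.ConnectionStrings["SignalRRedis"].ConnectionString;
GlobalHost.DependencyResolver.UseRedis(new RedisScaleoutConfiguration(connectionString, SIGNALR_REDIS_APPNAME));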
Hope this helps.

Akka.Net cluster singleton - handover not occurs when current singleton node shutdown unexpectedly

I'm trying Akka.NET Cluster Tools in order to use the singleton behavior, and it seems to work perfectly, but only when the current singleton node "host" leaves the cluster gracefully. If I suddenly shut down the host node, the handover does not occur.
Background
I'm building a system that will (initially) be composed of four nodes. One of those nodes will be the "workers coordinator", responsible for monitoring some data from a database and, when necessary, submitting jobs to the other workers. I was thinking of subscribing to cluster events and using the role-leader-changed event to make an actor (on the leader node) become the coordinator, but I think the Cluster Singleton is a better choice in this case.
Working sample (but only if I leave the cluster gracefully)
// (declared elsewhere in the class)
private static readonly ManualResetEvent _leaveClusterEvent = new ManualResetEvent(false);

private void Start()
{
    Console.Title = "Worker";
    var section = (AkkaConfigurationSection)ConfigurationManager.GetSection("akka");
    var config = section.AkkaConfig;
    // Create a new actor system (a container for your actors)
    var system = ActorSystem.Create("SingletonActorSystem", config);
    var cluster = Cluster.Get(system);
    cluster.RegisterOnMemberRemoved(() => MemberRemoved(system));
    var settings = new ClusterSingletonManagerSettings("processorCoordinatorInstance",
        "worker", TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(1));
    var actor = system.ActorOf(ClusterSingletonManager.Props(
            singletonProps: Props.Create<ProcessorCoordinatorActor>(),
            terminationMessage: PoisonPill.Instance,
            settings: settings),
        name: "processorCoordinator");
    string line = Console.ReadLine();
    if (line == "g") // handover works
    {
        cluster.Leave(cluster.SelfAddress);
        _leaveClusterEvent.WaitOne();
        system.Shutdown();
    }
    else // doesn't work
    {
        system.Shutdown();
    }
}

private async void MemberRemoved(ActorSystem actorSystem)
{
    await actorSystem.Terminate();
    _leaveClusterEvent.Set();
}
Configuration
akka {
  suppress-json-serializer-warning = on
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
  }
  remote {
    helios.tcp {
      port = 0
      hostname = localhost
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://SingletonActorSystem@127.0.0.1:4053"]
    roles = [worker]
  }
}
Thank you @Horusiath, your answer is totally right! I wasn't able to find this configuration in the Akka.NET documentation, and I didn't realize that I was supposed to look at the Akka documentation. Thank you very much!
Have you tried setting akka.cluster.auto-down-unreachable-after to some timeout (e.g. 10 sec)? – Horusiath Aug 12 at 11:27
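For reference, that setting goes in the cluster section of the HOCON config shown above; a minimal sketch, using the 10-second example from the comment:

akka {
  cluster {
    auto-down-unreachable-after = 10s
  }
}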
Posting this as an answer, as a caution for those who find this post.
Using auto-downing is NOT recommended in a clustered environment, because different parts of the system might decide after some time that the other part is down, splitting the cluster into two clusters, each with its own cluster singleton.
Related Akka docs: https://doc.akka.io/docs/akka/current/split-brain-resolver.html