I was trying to do some simple benchmarking (using JMH) of an Ignite cache with write-behind enabled (labelled "write-ahead" in my cache names).
Write-through works fine; however, write-behind errors out.
I'm getting a CacheWriterException, and the connection pool is timing out.
The datasource is the default HikariCP datasource configured by Spring Boot (it works correctly with write-through).
Ignite Cache Configuration
CacheConfiguration<Long, Customer> customerWriteAheadCacheCfg = new CacheConfiguration<>(CacheName.WRITE_AHEAD.getCacheName());
customerWriteAheadCacheCfg.setIndexedTypes(Long.class, Customer.class);
customerWriteAheadCacheCfg.setWriteThrough(true);
customerWriteAheadCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
customerWriteAheadCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
customerWriteAheadCacheCfg.setWriteBehindEnabled(true);
customerWriteAheadCacheCfg.setWriteBehindBatchSize(50);
Store and Entity Configuration
CacheJdbcPojoStoreFactory<Long, Customer> customerPojoFactory1 = new CacheJdbcPojoStoreFactory<>();
customerPojoFactory1.setDataSourceBean("dataSource");
customerPojoFactory1.setDialect(new BasicJdbcDialect());
JdbcType customerPojoType1 = new JdbcType();
customerPojoType1.setCacheName(CacheName.WRITE_AHEAD.getCacheName());
customerPojoType1.setKeyType(Long.class);
customerPojoType1.setValueType(Customer.class);
customerPojoType1.setDatabaseTable("customer");
customerPojoType1.setKeyFields(new JdbcTypeField(Types.INTEGER, "id", Long.class, "id"));
customerPojoType1.setValueFields(
new JdbcTypeField(Types.VARCHAR, "name", String.class, "name"),
new JdbcTypeField(Types.VARCHAR, "type", String.class, "type")
);
customerPojoFactory1.setTypes(customerPojoType1);
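(Not shown above: for the store to be used at all, the factory has to be attached to the cache configuration. The presumed wiring is a single call:)
// assumed wiring between the POJO store factory and the cache configuration
customerWriteAheadCacheCfg.setCacheStoreFactory(customerPojoFactory1);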
For the benchmark, I'm just doing a put into the cache:
public void saveToCacheWriteAhead(Customer customer) {
waCache.put(customer.getId(), customer);
}
Any idea what might be causing the error?
"Connection is not available" could mean that the connection pool is full.
In case of write ahead a connection is opened within each writing thread.
Thus the max number of connections should be at least the number of writing threads.
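A minimal sizing sketch, assuming Spring Boot's default HikariCP pool (the thread count and pool size here are illustrative values, not recommendations):
// Ignite side: make the number of write-behind flusher threads explicit
int flushThreads = 4;
customerWriteAheadCacheCfg.setWriteBehindFlushThreadCount(flushThreads);
// Spring Boot side: raise the Hikari ceiling to match, in application.properties:
// spring.datasource.hikari.maximum-pool-size=8   (>= flushThreads, plus headroom for other callers)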
I have a non-persistent Ignite cache that stores the following elements:
Key --- java.lang.String
Value --- Custom class
public class Transaction {
private int counter;
public Transaction(int counter) {
this.counter = counter;
}
@Override
public String toString() {
return String.valueOf(counter);
}
}
The code below works fine, even though I am putting a custom object into Ignite.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);
cfg.setPeerClassLoadingEnabled(true);
cfg.setDeploymentMode(DeploymentMode.CONTINUOUS);
TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));
cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
Ignite ignite = Ignition.start(cfg);
IgniteCache<String, Transaction> cache = ignite.getOrCreateCache("blocked");
cache.put("c1234_t2345_p3456", new Transaction(100);
The code below fails with a ClassNotFoundException when I try to store a list of objects instead. It is exactly the same as the code above, except for the list of objects. Why does a list of objects fail when a custom object stored directly works fine?
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);
cfg.setPeerClassLoadingEnabled(true);
cfg.setDeploymentMode(DeploymentMode.CONTINUOUS);
TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));
cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
Ignite ignite = Ignition.start(cfg);
IgniteCache<String, List<Transaction>> cache = ignite.getOrCreateCache("blocked");
cache.put("c1234_t2345_p3456", Arrays.asList(new Transaction(100)));
Storing custom objects in-memory in Ignite worked, but trying to store List objects instead caused a ClassNotFoundException on the server. I was able to solve this by copying the custom class definition to "/ignite_home/bin/libs", but I'm curious why the first case worked and the second didn't. Can anyone help me understand what's happening here? Is there any other way to resolve this issue?
OK, after many trials, I have an observation that evens out the differences between the two scenarios above. When I declare the caches dynamically from code, as I did earlier, Ignite somehow insists on having the custom classes in the bin/libs folder. But if I define the cache in ignite-config.xml, Ignite handles both use cases evenly and doesn't even throw the ClassNotFoundException. So my takeaway is that pre-declared caches are safer, since I see different behaviour when creating them dynamically from code. I changed the cache declaration to the declarative model, and now both use cases work fine.
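For reference, the same pre-declared idea can also be expressed from code by registering the cache on the IgniteConfiguration before start, instead of creating it dynamically with getOrCreateCache. This is only a minimal sketch (whether it sidesteps the peer class-loading issue exactly the way the XML declaration did is worth verifying):
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);
cfg.setPeerClassLoadingEnabled(true);
// declare the cache up front, at node-configuration time
cfg.setCacheConfiguration(new CacheConfiguration<String, List<Transaction>>("blocked"));
Ignite ignite = Ignition.start(cfg);
// the cache already exists, so no dynamic cache creation is involved
IgniteCache<String, List<Transaction>> cache = ignite.cache("blocked");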
Using:
StackExchange.Redis v1.1.608.0
RedLock.net v1.7.4.0
This code always returns false after 250-600ms:
var eps = new [] { new DnsEndPoint("localhost", 6379) };
var lf = new RedisLockFactory(eps);
var resource = "the-thing-we-are-locking-on";
var expiry = TimeSpan.FromSeconds(30);
using (var redisLock = lf.Create(resource, expiry))
{
Response.Write("Lock acquired: " + redisLock.IsAcquired);
}
I'm struggling to work out why, as I'm able to cache things in Redis just fine with StackExchange.Redis connection string localhost,allowAdmin=true.
In the Redis console I can see a client is being connected, but that's as far as it gets.
I've added a firewall rule for port 6379 but nothing changed.
Any ideas on why the lock can never be acquired?
Found the cause of the issue. I'm using MSOpenTech Redis server v3.2.100:
https://github.com/MSOpenTech/redis/releases
Rolling back to v3.0.500 appears to fix the issue. Not ideal, but in a testing environment it should be OK for now.
I'm not sure whether it is a bug or whether I'm doing something wrong. Here is the code:
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using StackExchange.Redis;
ConnectionMultiplexer conn = null;
while (conn == null)
{
try
{
conn = ConnectionMultiplexer.Connect("localhost:6379");
}
catch (Exception)
{
conn = null;
Thread.Sleep(TimeSpan.FromSeconds(5));
}
}
var db = conn.GetDatabase();
var transaction = db.CreateTransaction();
var tasks = new List<Task>();
tasks.Add(transaction.HashSetAsync("key", "field", "value"));
if (transaction.Execute())
{
Task.WaitAll(tasks.ToArray());
}
When I run it with Redis already started (Windows versions 2.6, 2.8.17, 2.8.19), everything works fine. If I start Redis only after a few iterations of the loop, either execution never enters the if-statement, or it enters and then blocks on WaitAll(). If I check the values in Redis, they are stored.
This situation happens when we start the server and forget to start Redis first. After a postponed start of Redis, it gets stuck. The same problem appears when using a batch instead of a transaction.
Am I connecting the multiplexer wrong, or is it a bug? (I found a few issues that looked similar, but I'm not sure.)
It was a bug in older versions of StackExchange.Redis (1.0.481 and 1.0.488; I didn't test any older ones). With the newer version 1.1.553 it works fine (https://github.com/StackExchange/StackExchange.Redis/issues/200).
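For anyone pinned to an affected version, a defensive sketch (the 10-second timeout is an arbitrary illustrative value) is to use the Task.WaitAll overload that takes a timeout, so the caller can't block forever:
if (transaction.Execute())
{
    // returns false on timeout instead of blocking indefinitely
    if (!Task.WaitAll(tasks.ToArray(), TimeSpan.FromSeconds(10)))
    {
        // timed out; the writes may still have been applied server-side
    }
}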
I am just a few hours into Redis and ServiceStack.Redis and trying to learn them.
Previously I used the ASP.NET cache, where I stored a DataSet and retrieved it when required.
I am trying to accomplish the same with ServiceStack.Redis, but it raises an exception:
An unhandled exception of type 'System.StackOverflowException' occurred in ServiceStack.Text.dll
Here is the code:
using System;
using System.Data;
using ServiceStack.Redis;

static void Main(string[] args)
{
var redisClient = new RedisClient("localhost");
DataSet ds = new DataSet();
ds.Tables.Add("table1");
ds.Tables[0].Columns.Add("col1", typeof(string));
DataRow rw = ds.Tables[0].NewRow();
rw[0] = "samtech";
ds.Tables[0].Rows.Add(rw);
//following line raises exception
redisClient.Set<System.Data.DataSet>("my_ds", ds, DateTime.Now.AddSeconds(60));
}
Can someone tell me what I am doing wrong?
Can I only store custom classes in Redis, and not a DataSet?
DataSets are extremely poor candidates for serialization and, as a result, are not supported by any ServiceStack library; use clean POCO models only.
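A minimal sketch of that advice applied to the example above (the Company class is an illustrative stand-in for the one-column DataTable, not a ServiceStack type):
using System;
using ServiceStack.Redis;

public class Company
{
    public string Col1 { get; set; }
}

class Program
{
    static void Main(string[] args)
    {
        var redisClient = new RedisClient("localhost");
        var company = new Company { Col1 = "samtech" };
        // POCOs serialize cleanly with ServiceStack.Text, unlike DataSets
        redisClient.Set("my_company", company, DateTime.Now.AddSeconds(60));
        var restored = redisClient.Get<Company>("my_company");
        Console.WriteLine(restored.Col1); // prints "samtech"
    }
}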
I am trying to set up Redis on AppHarbor. I have followed their instructions, and again I have an issue with the Booksleeve API. Here is the code I am using to make it work initially:
var connectionUri = new Uri(url);
using (var redis = new RedisConnection(connectionUri.Host, connectionUri.Port, password: connectionUri.UserInfo.Split(new[] { ':' }, 2)[1]))
{
redis.Strings.Set(1, "greeting", "welcome to remember your stuff!");
try
{
var task = redis.Strings.GetString(1, "greeting");
redis.Wait(task);
ViewBag.Message = task.Result;
}
catch (Exception)
{
// It throws an exception trying to wait for the task?
}
}
The issue is that it sets the string correctly, but when trying to retrieve the same string from the key-value store, it throws a timeout exception while waiting for the task to execute. This same code works against my local Redis server connection, however.
Am I using the API the wrong way, or is this something related to AppHarbor?
Thanks
Like a SqlConnection, you need to call Open() (otherwise your messages are queued for delivery).
Unlike SqlConnection, you should not fire up a RedisConnection each time you need it - it is intended to be used as a shared, thread-safe, multiplexer - i.e. a single connection is held somewhere and used by lots and lots of unrelated callers. Unless of course you only need to do one thing!
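A minimal sketch of that pattern with Booksleeve (the RedisStore holder class and the connection details are illustrative): open the connection once, then hand the same instance to every caller.
using System;
using BookSleeve;

public static class RedisStore
{
    private static readonly RedisConnection Connection = CreateConnection();

    private static RedisConnection CreateConnection()
    {
        // host/port/password are placeholders for your AppHarbor values
        var conn = new RedisConnection("localhost", 6379, password: null);
        conn.Wait(conn.Open()); // Open() must be called before the connection is usable
        return conn;
    }

    public static RedisConnection Current
    {
        get { return Connection; }
    }
}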