Is there an asynchronous issue in Redis?

If Redis is a single-threaded server, why is the result not 100000? I think it's not a Redis issue, but I want to know the reason. Thanks.
RedisConnection.Default.Database.StringSet("mykey1", 0);
Parallel.For(0, 50000, new ParallelOptions { MaxDegreeOfParallelism = 2 }, (i) =>
{
    var number = RedisConnection.Default.Database.StringGet("mykey1");
    int result = int.Parse(number);
    RedisConnection.Default.Database.StringSet("mykey1", result + 1);
});
Console.WriteLine("Result" + RedisConnection.Default.Database.StringGet("mykey1"));

You could use MULTI/EXEC to avoid this problem. It makes sure the two commands are not interleaved with commands from other clients; they run as a single transaction.
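A minimal sketch of that idea with StackExchange.Redis, assuming RedisConnection.Default.Database is an IDatabase as the question's code suggests: CreateTransaction plus a Condition gives the same guarantee as WATCH/MULTI/EXEC, retrying whenever another client changed the key in between.
// sketch only: optimistic check-and-set built on a Redis transaction
var db = RedisConnection.Default.Database;
bool committed = false;
while (!committed)
{
    RedisValue current = db.StringGet("mykey1");
    var tran = db.CreateTransaction();
    // the queued SET only runs if mykey1 still holds the value we just read
    tran.AddCondition(Condition.StringEqual("mykey1", current));
    tran.StringSetAsync("mykey1", (int)current + 1);
    committed = tran.Execute(); // false means another client won the race, so retry
}
That said, the StringIncrement approach shown in the answer below is the simpler fix, since INCR is atomic on the server.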

Because you make two separate calls to Redis without a transactional block (MULTI/EXEC), your StringGet and StringSet operations are not atomic: the threads' Get and Set operations can interleave.
Think about one possible execution sequence:
Thread1 reads mykey1 = 40
Thread2 reads mykey1 = 40
Thread2 writes mykey1 = 41
Thread2 reads mykey1 = 41
Thread2 writes mykey1 = 42
Thread1 writes mykey1 = 41
Thread1's final write overwrites Thread2's second increment, so updates are lost and the final count comes up short.
The proper way to write this code is to replace the separate calls to StringGet and StringSet with a call to StringIncrement:
RedisConnection.Default.Database.StringSet("mykey1", 0);
Parallel.For(0, 50000, new ParallelOptions { MaxDegreeOfParallelism = 2 }, (i) =>
{
    RedisConnection.Default.Database.StringIncrement("mykey1");
});
Console.WriteLine("Result" + RedisConnection.Default.Database.StringGet("mykey1"));


SQL connection timeout error with Entity Framework

I am updating 2600 records in a table at once with Entity Framework.
It was working previously, but now it suddenly started throwing a timeout error every time.
The timeout property is set to 150.
Also, multiple users are using the application at the same time.
Below is the code:
foreach (var k in context.Keywords.Where(k => k.CurrentDailyCount > 0))
{
    k.CurrentDailyCount = 1;
}
context.SaveChanges();
What could be the issue behind this error? It was working fine but suddenly started throwing a timeout error.
var entries = context.Keywords.Where(k => k.CurrentDailyCount > 0).ToList() ?? new List<Keyword>();
foreach (var k in entries)
{
    k.CurrentDailyCount = 1;
}
await context.SaveChangesAsync();
Store the filtered keywords in a variable to save the time it takes to search: context.Keywords.Where(k => k.CurrentDailyCount > 0).ToList()
Ensure the filtered keywords are never null: ?? new List<Keyword>()
Save the records asynchronously: await context.SaveChangesAsync();
First, you may consider selecting only the primary key field and CurrentDailyCount. You can write it like this:
context.Keywords.Select(x => new Keyword()
{
    PrimaryKeyColumn = x.PrimaryKeyColumn,
    CurrentDailyCount = x.CurrentDailyCount
}).Where(k => k.CurrentDailyCount > 0)
You should also check the execution time of your SQL statement. If the CurrentDailyCount column is not indexed, it is no surprise that your code gets a timeout error.
The timeout property is set to 150.
Which timeout are you addressing: the SQL connection/command timeout or the Kestrel server timeout? If you set the SQL timeout to 150 seconds while the Kestrel timeout is left at its default (120 seconds), your request is cut off when it reaches 120 seconds.
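If the 150 refers to the Entity Framework command timeout, it is worth confirming where it is actually applied. A minimal sketch, assuming EF6 (the EF Core equivalent is shown as a comment):
// EF6: command timeout in seconds (null means use the provider default)
context.Database.CommandTimeout = 150;

// EF Core equivalent (assuming EF Core 2.1 or later):
// context.Database.SetCommandTimeout(TimeSpan.FromSeconds(150));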

SCAN command performance with phpredis

I'm replacing KEYS with SCAN using phpredis.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->setOption(Redis::OPT_SCAN, Redis::SCAN_RETRY);
$it = NULL;
while ($arr_keys = $redis->scan($it, "mykey:*", 10000)) {
    foreach ($arr_keys as $str_key) {
        echo "Here is a key: $str_key\n";
    }
}
According to the Redis documentation, I use SCAN to paginate searches and avoid the drawbacks of KEYS.
But in practice, the code above is roughly three times slower than a single $redis->keys() call.
So I'm wondering whether I've done something wrong, or whether this is simply the speed I have to pay to avoid the dangers of KEYS.
Note that I have 400K+ keys in total in my DB, and only 4 mykey:* keys.
A word of caution about using that example:
$it = NULL;
while ($arr_keys = $redis->scan($it, "mykey:*", 10000)) {
    foreach ($arr_keys as $str_key) {
        echo "Here is a key: $str_key\n";
    }
}
That loop can receive an empty array when none of the 10000 keys scanned in one iteration matches the pattern; the while condition then treats it as the end and gives up, so you don't get all the keys you wanted! I would recommend doing something more like this:
$it = null;
do
{
    $arr_keys = $redis->scan($it, "mykey:*", 10000);
    if (is_array($arr_keys) && !empty($arr_keys))
    {
        foreach ($arr_keys as $str_key)
        {
            echo "Here is a key: $str_key\n";
        }
    }
} while ($arr_keys !== false);
As for why it takes so long: with 400K+ keys and a COUNT of 10000, that is about 40 SCAN requests to Redis. If Redis is not on the local machine, add a network round trip to each of those 40 queries.
Using KEYS in a production environment is effectively forbidden because it blocks the entire server while it iterates the global keyspace, so there is no real debate about whether or not to use KEYS.
On the other hand, if you want to speed things up, you should go further with Redis: you should index your data.
I doubt these 400K keys cannot be categorized into sets, sorted sets, or hashes; then, when you need a particular subset of your 400K-key database, you can run the scan-equivalent command (SSCAN, ZSCAN, HSCAN) against a collection of, say, 1K items instead of 400K.
Redis is about indexing data; otherwise you're using it as just a simple key-value store.
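As a rough illustration of that indexing idea (shown here in C# with StackExchange.Redis, since this page already mixes clients; the underlying commands are SADD and SSCAN, available from phpredis as sAdd()/sScan()): maintain a small set of the interesting key names at write time, then enumerate only that set instead of scanning the whole keyspace. The set name idx:mykey and the sample key are made up for the example.
// sketch only; "idx:mykey" is a hypothetical index set name
var muxer = ConnectionMultiplexer.Connect("127.0.0.1:6379");
IDatabase db = muxer.GetDatabase();

// at write time, record each mykey:* name in the index set (SADD)
db.StringSet("mykey:42", "some value");
db.SetAdd("idx:mykey", "mykey:42");

// at read time, iterate the handful of indexed names (SSCAN) instead of 400K+ keys
foreach (RedisValue name in db.SetScan("idx:mykey"))
{
    Console.WriteLine(name);
}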

Entity Framework Transactions and Deadlock

When SaveChanges() is called on the context, all insert/delete/update operations are executed in a single transaction. It is also possible to use DbContextTransaction for transactions. I am trying to simulate deadlock using both of these approaches. When I use DbContextTransaction, I get the deadlock exception right away but SaveChanges() alone does not throw any deadlock exceptions even after an hour. Am I doing something wrong?
Here is the code with DbContextTransaction. I try to update the first row and then the second row in the main thread. I also start another task which tries to update the second row first and then the first row.
while (true)
{
    using (var context = new SchoolDBEntities())
    {
        using (System.Data.Entity.DbContextTransaction dbTran = context.Database.BeginTransaction())
        {
            Random r = new Random();
            int r1 = r.Next();
            int r2 = r.Next();
            Student std1 = context.Students.First();
            std1.StudentName = "test" + r1;
            context.SaveChanges();
            Student std2 = context.Students.Find(2);
            std2.StudentName = "test" + r2;
            context.SaveChanges();
            dbTran.Commit();
        }
    }
}
But when I try it with just SaveChanges(), it does not generate a deadlock:
while (true)
{
    using (var context = new SchoolDBEntities())
    {
        try
        {
            Random r = new Random();
            int r1 = r.Next();
            int r2 = r.Next();
            Student std1 = context.Students.First();
            std1.StudentName = "test" + r1;
            Student std2 = context.Students.Find(2);
            std2.StudentName = "test" + r2;
            context.SaveChanges();
        }
        catch (Exception)
        {
            // exception handling elided in the original post
        }
    }
}
I am using SQL Profiler to trace the transactions. I even added more updates to the second approach just to make that transaction's duration equal to the DbContextTransaction case, thinking it might be the reason, but still no luck! When I look at the trace, I see that updates belonging to a particular transaction start only after the previous transaction is committed. What could be the reason?
Upon further investigation, I found out that regardless of the order of the changes I make in the context, the SaveChanges() method always sends the update queries to SQL Server ordered by the table's primary key. In other words, even though I try to reverse the order of the updates by first changing row 2 and then row 1, SaveChanges() executes the update for row 1 first and then the one for row 2. That's why I don't get a deadlock when using just SaveChanges(): it does not preserve the reversed order of the queries.

Can redis disable the replies for pipelined commands?

I'm currently developing a cache that needs to increase a few hundred counters on every call, like this:
redis.pipelined do
  keys.each { |key| redis.incr key }
end
In my profiling I saw that the replies I don't need are still collected by the redis gem and waste some valuable time. Can I tell Redis in some way that I'm not interested in the replies? Is there a better way to increment lots of values? I didn't find a MINCR command, for example.
Thanks in advance!
Yes... in 2.6, at least. You could do this in a Lua script, and simply have the script return an empty result. Here it is using the BookSleeve client:
const int DB = 0; // any database number
// prime some initial values
conn.Keys.Remove(DB, new[] { "a", "b", "c" });
conn.Strings.Increment(DB, "b");
conn.Strings.Increment(DB, "c");
conn.Strings.Increment(DB, "c");
// run the script, passing "a", "b", "c", "c" to
// increment a & b by 1, c twice
var result = conn.Scripting.Eval(DB,
    @"for i,key in ipairs(KEYS) do redis.call('incr', key) end",
    new[] { "a", "b", "c", "c" }, // <== aka "KEYS" in the script
    null); // <== aka "ARGV" in the script
// check the incremented values
var a = conn.Strings.GetInt64(DB, "a");
var b = conn.Strings.GetInt64(DB, "b");
var c = conn.Strings.GetInt64(DB, "c");
Assert.IsNull(conn.Wait(result), "result");
Assert.AreEqual(1, conn.Wait(a), "a");
Assert.AreEqual(2, conn.Wait(b), "b");
Assert.AreEqual(4, conn.Wait(c), "c");
Or to do the same thing with incrby, passing the "by" numbers as arguments, change the middle portion to:
// run the script, passing "a", "b", "c" and 1, 1, 2
// to increment a & b by 1, and c by 2
var result = conn.Scripting.Eval(DB,
    @"for i,key in ipairs(KEYS) do redis.call('incrby', key, ARGV[i]) end",
    new[] { "a", "b", "c" }, // <== aka "KEYS" in the script
    new object[] { 1, 1, 2 }); // <== aka "ARGV" in the script
No, this is not possible. There is no way to tell Redis to not reply.
The only way to avoid waiting synchronously for replies at some points is to run a fully asynchronous client (like node.js or hiredis in asynchronous mode).
Version 3.2 of Redis supports this explicitly:
https://redis.io/commands/client-reply
The CLIENT REPLY command controls whether the server will reply to the client's commands. The following modes are available:
ON. This is the default mode, in which the server returns a reply to every command.
OFF. In this mode the server will not reply to client commands.
SKIP. This mode skips the reply of the command immediately following it.
Return value
When called with either OFF or SKIP subcommands, no reply is made. When called with ON:
Simple string reply: OK.
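As a side note, some clients offer a purely client-side way to stop collecting replies without touching CLIENT REPLY. For example, StackExchange.Redis (a different client from the redis gem in the question) can mark commands as fire-and-forget, which discards each reply as it arrives; a rough sketch, where keys stands in for your collection of counter names:
// sketch using StackExchange.Redis; this is a client-side technique,
// distinct from the server-side CLIENT REPLY command described above
var muxer = ConnectionMultiplexer.Connect("localhost:6379");
IDatabase db = muxer.GetDatabase();
foreach (string key in keys) // "keys" is a placeholder for your counter names
{
    // INCR is sent, but the client neither waits for nor stores the reply
    db.StringIncrement(key, flags: CommandFlags.FireAndForget);
}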

Getting deadlocks in SQL Server

I am getting deadlocks occasionally in SQL Server. I created a function for locking non-database operations (credit card processing) so duplicates cannot happen. My functions are as follows (sorry for the Tcl, but the SQL is clear enough). Can anyone see why a deadlock happens occasionally?
proc ims_syn_lock_object { db object {timeout 30} {wait 1}} {
    if {[catch {
        while {true} {
            am_dbtransaction begin $db
            # read the object locks that aren't timed out
            set result [am_db1cell $db "SELECT object from GranularLocks WITH (ROWLOCK,HOLDLOCK) where object = [ns_dbquotevalue $object] AND timeActionMade > DATEADD(second,-timeout, GETDATE())"]
            # check to see if this object is locked and not timed out
            if { [string equal "" $result] } {
                break;
            } else {
                # another process has this object and it is not timed out.
                # release the row lock
                am_dbtransaction rollback $db
                if { $wait } {
                    # sleep for between 400 and 800 milliseconds
                    sleep [expr [ns_rand 400] + 400]
                } else {
                    # we aren't waiting on locked resources.
                    return 0;
                }
            }
        }
        # either the object lock has timed out, or the object isn't locked
        # create the object lock.
        ns_db dml $db "DELETE FROM GranularLocks WHERE object = [ns_dbquotevalue $object]"
        ns_db dml $db "INSERT INTO GranularLocks(object,timeout) VALUES ([ns_dbquotevalue $object],[ns_dbquotevalue $timeout int])"
        # releases the row lock and commits the transaction
        am_dbtransaction commit $db
    } errMsg]} {
        ns_log Notice "Could not lock $object. $errMsg"
        catch {
            am_dbtransaction rollback $db
        } errMsg
        return 0
    }
    return 1
}

proc ims_syn_unlock_object {db object } {
    # simply remove the object's lock
    ns_db dml $db "DELETE FROM GranularLocks WHERE object = [ns_dbquotevalue $object]"
}
Try adding UPDLOCK to the first SELECT so the read takes an update lock, rather than only a shared lock, up front.
Try sp_getapplock, which is provided for exactly this kind of custom locking (a sketch follows below).
I'd prefer number 2, personally...
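For what the sp_getapplock route might look like, here is a hypothetical sketch in C#/ADO.NET rather than Tcl (the same EXEC can be issued from any client; the resource name, timeout, and connection string are made-up examples):
// sketch only: serialize the credit card processing for one object via an application lock
using (var conn = new SqlConnection(connectionString)) // connectionString is a placeholder
{
    conn.Open();
    using (var tran = conn.BeginTransaction())
    {
        var cmd = new SqlCommand("sp_getapplock", conn, tran) { CommandType = CommandType.StoredProcedure };
        cmd.Parameters.AddWithValue("@Resource", "credit-card:" + objectId); // hypothetical resource name
        cmd.Parameters.AddWithValue("@LockMode", "Exclusive");
        cmd.Parameters.AddWithValue("@LockTimeout", 30000); // milliseconds
        var ret = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
        ret.Direction = ParameterDirection.ReturnValue;
        cmd.ExecuteNonQuery();

        if ((int)ret.Value >= 0)
        {
            // lock granted: do the credit card processing here, then commit;
            // a transaction-owned applock is released automatically at commit/rollback
            tran.Commit();
        }
        else
        {
            // not granted (timeout, deadlock victim, ...): give up or retry
            tran.Rollback();
        }
    }
}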
It would be useful to have the deadlock graph.
SQL deadlocks happen not only because of the queries involved; the schema involved is equally important. For example, you can get a reader-writer deadlock with perfectly valid and 'correct' queries simply because the read and the write choose different access paths to the data. I could see this happening in your case if an index on timeActionMade exists on GranularLocks that does not cover the 'object' column. But again, the solution will depend on what the actual deadlock is on.