DragonflyBSD: possible race condition in lock manager (kern_lock.c) code?

Lately I've been reading the lock manager (kern_lock.c) code, and I bumped into a scenario which I think could create a race condition.
Step 1:
#undo_shreq(...): if there is an upgrade request pending, the code resets the LKC_UPREQ flag and issues the wakeup() call; this happens only if
(count & (LKC_EXREQ | LKC_UPREQ | LKC_CANCEL)) &&
(count & (LKC_SMASK | LKC_XMASK)) == 0)
Step 2:
Now, in parallel, another thread T2 is trying to get an exclusive lock and reaches the trivial condition in #lockmgr_exclusive(...):
if ((count & (LKC_UPREQ | LKC_EXREQ |
LKC_XMASK)) == 0 &&
((count & LKC_SHARED) == 0 ||
(count & LKC_SMASK) == 0))
So T2 increases the count by 1 and sets itself as the owner thread, meaning it has obtained exclusivity.
Step 3:
A thread (T1) that slept on the LKC_UPREQ flag is woken by Step 1; here is the code after the sleep (after the LK_SLEEPFAIL and sleep-error sanity checks) in #lockmgr_upgrade(...):
if ((count & LKC_UPREQ) == 0) { // reset by step 1
KKASSERT((count & LKC_XMASK) == 1); // true, by step 2
lkp->lk_lockholder = td;
break;
}
What I see (please correct me if I am wrong) is that at Step 3, thread T1 sets lk_lockholder to itself, meaning it too has obtained exclusivity!

The way it works is that if one thread sets UPREQ and then sleeps, another thread will grant it the exclusive lock and wake it up. The granting thread clears UPREQ and increments the exclusive count, but does not know 'who' set the UPREQ, so it is then up to the thread that set UPREQ to set the lockholder field.
Since the lockmgr code must deal with many-exclusive-to-single-shared starvation, many-shared-to-single-exclusive starvation, AND deadlock edge cases, it is fairly complex. Edge cases have been found over the last year or two, but this particular case doesn't look like a bug to me. It is just a bit confusing because the exclusive lock is granted (UPREQ cleared and excl count incremented) by the second thread, while the first thread is still responsible for setting the lk_lockholder field.
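To make the hand-off easier to see, here is a schematic Python model of the protocol described above. It is an illustration only, not the actual kern_lock.c logic, and the names only loosely mirror the kernel identifiers. The point it tries to capture is my reading of the answer: the grant (clearing UPREQ and making the exclusive count non-zero) is a single atomic update of the lock word, so the fast path from Step 2 never observes a state with no UPREQ and no exclusive owner.

import threading

UPREQ = 1 << 0      # stand-in for LKC_UPREQ
XCOUNT = 1 << 1     # one unit of exclusive count (stand-in for LKC_XMASK)

class MiniLock:
    def __init__(self):
        self.count = UPREQ                   # an upgrade request is already pending
        self.holder = None                   # plays the role of lk_lockholder
        self._word = threading.Lock()        # emulates the atomic cmpset on the lock word
        self.granted = threading.Event()     # stands in for tsleep()/wakeup()

    def cmpset(self, expect, new):
        # Atomically replace the lock word if it still equals `expect`.
        with self._word:
            if self.count == expect:
                self.count = new
                return True
            return False

def undo_shreq(lk):
    # Step 1: the last shared holder leaves and grants the pending upgrade.
    # UPREQ is cleared AND the exclusive count becomes 1 in one atomic swap.
    if lk.cmpset(UPREQ, XCOUNT):
        lk.granted.set()                     # wakeup() the sleeping upgrader

def exclusive_fastpath(lk, me):
    # Step 2: a third thread tries the trivial case.  Because the grant left the
    # exclusive count non-zero, this compare-and-set fails and the thread must wait.
    if lk.cmpset(0, XCOUNT):
        lk.holder = me
        return True
    return False

def upgrade_after_sleep(lk, me):
    # Step 3: the thread that set UPREQ wakes, sees the flag cleared, and simply
    # records itself as the holder; the lock was already granted to it in Step 1.
    lk.granted.wait()
    assert lk.count == XCOUNT                # UPREQ cleared, exclusive count == 1
    lk.holder = me

if __name__ == "__main__":
    lk = MiniLock()
    t1 = threading.Thread(target=upgrade_after_sleep, args=(lk, "T1"))
    t1.start()
    undo_shreq(lk)                           # Step 1 grants the lock
    print(exclusive_fastpath(lk, "T2"))      # Step 2: False, the fast path fails
    t1.join()
    print(lk.holder)                         # T1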

Related

Is Sql.execute() synchronous or asynchronous with Groovy?

I stumbled onto a bit of Groovy code deleting rows from a database in batches of a given size:
for (int index = 0; index <= loopCnt; index++) {
sql.execute("DELETE TOP(" + maxEntriesToDeleteAtOnce + ") FROM LoaderQueue WHERE Status = ? AND LastUpdated < ?", stateToDelete, date)
}
I have zero experience with Groovy, and I know that the delete statement can take a few seconds to execute fully, even if the batch size is relatively small. I'm wondering if the execution will wait for each statement to be completed before looping, or if we might send a few statements in parallel. In short, I'm wondering if the sql.execute() command is synchronous.
I can't really try anything yet as there is no DEV environment for the application involved.

Synchronization between 2 applications polling a SQL table

I have 2 instances of a VB.NET application, each running on its own dedicated server. The application runs a While True loop with a 5 s sleep when IDLE (IDLE is when the table doesn't have any ProcessQuery to be treated). On each iteration, the application queries a table in the SQL database to see if there is anything it could process.
The problem is that I sometimes encounter a situation where both instances "take" the same ProcessQuery.
I'm using EntityFramework6. I have looked into EntityState but I don't think it does exactly what I'm trying to accomplish.
I was wondering what my solution would be to have the instances run cleanly in parallel. It's not impossible that at some point I'll have 12 instances running on 12 machines.
Thanks!
Dim conn As New Info_IndusEntities()
Dim DemandeWilma As WilmaDemandes =
    conn.WilmaDemandes.
        Where(Function(x) x.Site = "LONDON" AndAlso x.Statut = "toProcess").
        OrderBy(Function(x) x.RequestDate).FirstOrDefault()

If Not IsNothing(DemandeWilma) Then
    DemandeWilma.Statut = Statuts.EnTraitement.ToString()
    DemandeWilma.ServerName = Environment.MachineName
    DemandeWilma.ProcessDate = DateTime.Now
    conn.SaveChanges()
    Return DemandeWilma
End If
UPDATE (21/06/19)
I found an interesting article.
I started by adding a RowVersion column to my table.
I then refreshed my model and changed the Concurrency Check property of the RowVersion column in my ORM.
When I tested the update, here's the EF6 log:
UPDATE [dbo].[WilmaDemandes] SET [Statut] = @0, [ServerName] = @1,
[DateDebut] = @2 WHERE (([ID] = @3) AND ([RowVersion] = @4)) SELECT
[RowVersion] FROM [dbo].[WilmaDemandes] WHERE @@ROWCOUNT > 0 AND [ID]
= @3
-- @0: 'EnTraitement' (Type = String, Size = 20)
-- @1: 'TRB5995' (Type = String, Size = 20)
-- @2: '2019-06-25 7:31:01 AM' (Type = DateTime2)
-- @3: '124373' (Type = Int32)
-- @4: 'System.Byte[]' (Type = Binary, Size = 8)
-- Executing at 2019-06-25 7:31:24 AM -04:00
-- Completed in 95 ms with result: SqlDataReader
Closed connection at 2019-06-25 7:31:24 AM -04:00
Exception thrown:
'System.Data.Entity.Infrastructure.DbUpdateConcurrencyException' in
EntityFramework.dll
UPDATED (25/06/19)
The problem, as explained in this post, starts when you are using DB-First instead of Code-First: the property gets silently overwritten as soon as you update the model. Some people coded a console-app workaround that they run on pre-build. I'm not sure I'm quite ready to take this as the final solution.
Interesting tutorial on how to test optimistic concurrency and ways to resolve such an exception.
Add an "owner" column to your queue table
Your application updates one record (TOP 1) and sets the owner value to its own identifier (WHERE Owner IS NULL)
Now your application goes back and reads their owned rows and processes them
It's a simple pattern and it works great. If any processes happen to take ownership 'simultaneously', only one will actually get the reservation.
I'm not very good at LINQ so here's a brute force method, multiline for clarity:
' First try reserving a row
conn.Database.ExecuteSqlCommand(
    "WITH UpdateTop1 AS " &
    "  (SELECT TOP 1 * FROM WilmaDemandes " &
    "   WHERE Owner IS NULL " &
    "     AND Site = 'LONDON' " &
    "   ORDER BY RequestDate) " &
    "UPDATE UpdateTop1 SET Owner = 'ThisApplication'")

' See if we got one
Dim DemandeWilma As WilmaDemandes =
    conn.WilmaDemandes.
        Where(Function(x) x.Owner = "ThisApplication").FirstOrDefault()

' If we got a row, process it. Otherwise idle and repeat.
There's also no reason that you must reserve just one row. You could reserve all the free rows and work your way through them; meanwhile, other processes will pick up any subsequently arriving rows.
Personally I would refactor your status column to be NULL for new records that are ready to be processed, and otherwise hold the ID of the worker that has reserved the record.
It also helps to add things like timestamp columns to record when the row was reserved, and so on.
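For illustration only, here is a minimal sketch of the same "claim one row atomically" idea in Python with the standard sqlite3 module, so it is runnable anywhere. The table and column names mirror the question, the owner value is just a generated identifier, and the SQL is SQLite dialect (no TOP), so this is not a drop-in replacement for the SQL Server statement above.

import sqlite3
import uuid

owner = str(uuid.uuid4())                 # this worker's identifier
conn = sqlite3.connect("queue.db")        # assumes WilmaDemandes already exists

# One UPDATE both picks and reserves the oldest free row.  SQLite serializes
# writers, so two workers can never claim the same row.
conn.execute("""
    UPDATE WilmaDemandes
       SET Owner = :owner
     WHERE Owner IS NULL
       AND Site = 'LONDON'
       AND ID = (SELECT ID FROM WilmaDemandes
                  WHERE Owner IS NULL AND Site = 'LONDON'
                  ORDER BY RequestDate LIMIT 1)
""", {"owner": owner})
conn.commit()

# Read back whatever this worker now owns and process it; None means idle.
row = conn.execute(
    "SELECT * FROM WilmaDemandes WHERE Owner = ?", (owner,)).fetchone()
if row is not None:
    pass  # process the reserved request here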

What is the race condition for Redis INCR Rate Limiter 2?

I have read the INCR documentation here but I could not understand why Rate limiter 2 has a race condition.
In addition, what does the documentation mean by "the key will be leaked until we'll see the same IP address again"?
Can anyone help explain? Thank you very much!
You are talking about the following code, which has two problems in a multi-threaded environment.
1. FUNCTION LIMIT_API_CALL(ip):
2. current = GET(ip)
3. IF current != NULL AND current > 10 THEN
4. ERROR "too many requests per second"
5. ELSE
6. value = INCR(ip)
7. IF value == 1 THEN
8. EXPIRE(ip,1)
9. END
10. PERFORM_API_CALL()
11. END
the key will be leaked until we'll see the same IP address again
If the client dies, e.g. the client is killed or the machine goes down, before executing LINE 8, then the key ip won't be given an expiration. If we never see this ip again, the key will persist in the Redis database forever, i.e. it is leaked.
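As a small illustration (a sketch using redis-py against a local Redis instance; the IP address is made up), this is what the leak looks like if the client stops right after the INCR:

import redis

r = redis.Redis()
ip = "203.0.113.7"          # hypothetical client address
r.incr(ip)                  # LINE 6: the key now exists with value 1 ...
# ... the client crashes here, so LINE 8 (EXPIRE) never runs ...
print(r.ttl(ip))            # -1: the key has no expiration and persists, i.e. it leaks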
Rate limiter 2 has a race condition
Suppose the key ip doesn't exist in the database, and more than 10 clients, say 20, execute LINE 2 simultaneously. All of them will get a NULL current, so they all go into the ELSE clause. Finally all these clients execute LINE 10, and the API is called more than 10 times.
This solution fails because there's a time window between LINE 2 and LINE 3.
A Correct Solution
value = INCR(ip)
IF value == 1 THEN
EXPIRE(ip, 1)
END
IF value <= 10 THEN
return true
ELSE
return false
END
Wrap the above code into a Lua script to ensure it runs atomically. If this script returns true, perform the API call. Otherwise, do nothing.
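For illustration, here is one way the atomic version could look as a Lua script driven from redis-py. This is a sketch only: it assumes a local Redis instance, keeps the 10-requests-per-second limit and per-IP key from the pseudocode, and does the final comparison on the client side (which is safe, because the count is already fixed by the atomic script).

import redis

LIMIT_SCRIPT = """
local current = redis.call('INCR', KEYS[1])
if current == 1 then
    redis.call('EXPIRE', KEYS[1], 1)
end
return current
"""

r = redis.Redis()
limit_api_call = r.register_script(LIMIT_SCRIPT)

def allowed(ip):
    # INCR, the conditional EXPIRE and the returned count all run atomically
    # inside Redis, so the check-then-act window of the broken version is gone.
    return int(limit_api_call(keys=[ip])) <= 10

if allowed("203.0.113.7"):     # hypothetical client address
    pass                       # PERFORM_API_CALL()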

JDBC sampler with While Controller

I want to post some bulk messages. The system takes some time to process them, so I do not want to proceed to the 2nd iteration until they are done. My setup is something like this:
While Controller -> JDBC Request -> Beanshell PostProcessor
In the While Controller, the condition is ${__javaScript("${check_1}" != "0")}
check is the variable name used in the JDBC sampler, which checks whether all the messages have been processed. It's a count; if it is 0 we have to stop looping.
As part of the Beanshell PostProcessor, I have added a condition to wait if the count is not equal to 0.
if(${check_1} != 0) {
out("check Count not zero, waiting for 5 sec " + ${check_1});
Thread.sleep(5000);
}else
out("check Count is zero " + ${check_1});
What's happening is that the result is something like this:
If check_1 is > 0, it waits for 5 seconds, and as soon as it is 0, it runs into an infinite loop, executing the sampler multiple times.
Is there something wrong with the condition? Please suggest if you have any other solution.
The correct way to use the __javaScript() function to define the condition is:
${__javaScript(${check_1} != 0,)}
The correct way of accessing JMeter Variables from Beanshell is:
if(vars.get("check_1").equals("0"))
Hope this helps.

Transactions and watch statement in Redis

Could you please explain the following example from "The Little Redis Book" to me:
With the code above, we wouldn't be able to implement our own incr
command since they are all executed together once exec is called. From
code, we can't do:
redis.multi()
current = redis.get('powerlevel')
redis.set('powerlevel', current + 1)
redis.exec()
That isn't how Redis transactions work. But, if we add a watch to
powerlevel, we can do:
redis.watch('powerlevel')
current = redis.get('powerlevel')
redis.multi()
redis.set('powerlevel', current + 1)
redis.exec()
If another client changes the value of powerlevel after we've called
watch on it, our transaction will fail. If no client changes the
value, the set will work. We can execute this code in a loop until it
works.
Why can't we execute the increment in a transaction that can't be interrupted by other commands? Why do we need to iterate instead and wait until nobody changes the value before the transaction starts?
There are several questions here.
1) Why can't we execute the increment in a transaction that can't be interrupted by other commands?
Please note first that Redis "transactions" are completely different from what most people think of as transactions in a classical DBMS.
# Does not work
redis.multi()
current = redis.get('powerlevel')
redis.set('powerlevel', current + 1)
redis.exec()
You need to understand what is executed on the server side (in Redis), and what is executed on the client side (in your script). In the above code, the GET and SET commands will be executed on the Redis side, but the assignment to current and the calculation of current + 1 are supposed to be executed on the client side.
To guarantee atomicity, a MULTI/EXEC block delays the execution of the Redis commands until the EXEC. So the client will only pile up the GET and SET commands in memory and execute them in one shot, atomically, at the end. Of course, the attempt to assign the result of GET to current and the incrementation are supposed to occur well before that. Actually, the redis.get method will only return the string "QUEUED" to signal that the command is delayed, so the incrementation will not work.
In MULTI/EXEC blocks you can only use commands whose parameters can be fully known before the beginning of the block. You may want to read the documentation for more information.
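For a concrete (if library-specific) illustration: in Python's redis-py client, commands issued inside a transactional pipeline are only queued, so the GET cannot hand its value back to the script; redis-py signals this by returning the pipeline object itself rather than the value (the client in the book returns "QUEUED" instead). A small sketch, assuming a local Redis instance:

import redis

r = redis.Redis()
r.set('powerlevel', 10)

pipe = r.pipeline()            # transactional pipeline: wraps commands in MULTI/EXEC
queued = pipe.get('powerlevel')
print(queued)                  # the Pipeline object, not b'10' -- the GET is only queued
# current + 1 cannot be computed here; the value only materializes at EXEC time:
print(pipe.execute())          # [b'10']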
2) Why do we need to iterate instead and wait until nobody changes the value before the transaction starts?
This is an example of an optimistic concurrency pattern.
If we used no WATCH/MULTI/EXEC, we would have a potential race condition:
# Initial arbitrary value
powerlevel = 10
session A: GET powerlevel -> 10
session B: GET powerlevel -> 10
session A: current = 10 + 1
session B: current = 10 + 1
session A: SET powerlevel 11
session B: SET powerlevel 11
# In the end we have 11 instead of 12 -> wrong
Now let's add a WATCH/MULTI/EXEC block. With a WATCH clause, the commands between MULTI and EXEC are executed only if the value has not changed.
# Initial arbitrary value
powerlevel = 10
session A: WATCH powerlevel
session B: WATCH powerlevel
session A: GET powerlevel -> 10
session B: GET powerlevel -> 10
session A: current = 10 + 1
session B: current = 10 + 1
session A: MULTI
session B: MULTI
session A: SET powerlevel 11 -> QUEUED
session B: SET powerlevel 11 -> QUEUED
session A: EXEC -> success! powerlevel is now 11
session B: EXEC -> failure, because powerlevel has changed and was watched
# In the end, we have 11, and session B knows it has to attempt the transaction again
# Hopefully, it will work fine this time.
So you do not iterate in order to wait until nobody changes the value; rather, you attempt the operation again and again until Redis is sure the values are consistent and signals that it is successful.
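Here is what that retry loop typically looks like with redis-py (a sketch, assuming a local Redis instance). WatchError is raised when another client has touched the watched key, and the loop simply tries again, exactly as described above:

import redis

r = redis.Redis()
r.set('powerlevel', 10)

with r.pipeline() as pipe:
    while True:
        try:
            pipe.watch('powerlevel')                 # WATCH: immediate mode from here on
            current = int(pipe.get('powerlevel'))    # plain GET, value known client-side
            pipe.multi()                             # switch to buffered MULTI mode
            pipe.set('powerlevel', current + 1)
            pipe.execute()                           # EXEC; fails if powerlevel changed
            break
        except redis.WatchError:
            continue                                 # someone else won the race; retry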
In most cases, if the "transactions" are fast enough and the probability of contention is low, the updates are very efficient. Now, if there is contention, some extra operations will have to be done for some "transactions" (due to the iteration and retries). But the data will always be consistent and no locking is required.