Using multiple threads for DB updates results in higher write time per update - sql

So I have a script that is supposed to update a giant table (Postgres). Since the table has about 150m rows and I want to complete this as fast as possible, using multiple threads seemed like a perfect answer. However, I'm seeing something very weird.
When I use a single thread, the write time per update is much, much lower than when I use multiple threads.
require 'sequel'
.....
DB = Sequel.connect(DB_CREDS)
queue = Queue.new
read_query = DB["
  SELECT id, extra_fields
  FROM objects
  WHERE XYZ IS FALSE
"]
read_query.use_cursor(:rows_per_fetch => 1000).each do |row|
  queue.push(row)
end
Up until this point, IMO it shouldn't matter, because we're only reading from the DB; nothing is being written yet. From here, I've tried two approaches: single-threaded and multi-threaded.
NOTE - This is not the actual UPDATE query that I want to execute, it's just a pseudo one for demonstration purposes. The actual query is a lot longer and plays with JSON and stuff so I can't really update the entire table using a single query.
Single-threaded
until queue.empty?
  photo = queue.shift
  id = photo[:id]
  update_query = DB["
    UPDATE objects
    SET XYZ = TRUE
    WHERE id = #{id}
  "]
  result = update_query.update
end
If I execute this, I see in my DB logs that each update query takes less than 0.01 seconds:
I, [2016-08-15T10:45:48.095324 #54495] INFO -- : (0.001441s) UPDATE
objects SET XYZ = TRUE WHERE id = 84395179
I, [2016-08-15T10:45:48.103818 #54495] INFO -- : (0.008331s) UPDATE
objects SET XYZ = TRUE WHERE id = 84395181
I, [2016-08-15T10:45:48.106741 #54495] INFO -- : (0.002743s) UPDATE
objects SET XYZ = TRUE WHERE id = 84395182
Multi-threaded
MAX_THREADS = 5
num_threads = 0
all_threads = []
until queue.empty?
  if num_threads < MAX_THREADS
    photo = queue.shift
    num_threads += 1
    all_threads << Thread.new {
      id = photo[:id]
      update_query = DB["
        UPDATE objects
        SET XYZ = TRUE
        WHERE id = #{id}
      "]
      result = update_query.update
      num_threads -= 1
      Thread.exit
    }
  end
end
all_threads.each do |thread|
thread.join
end
Now, in theory it should be faster, right? But each update takes about 0.5 seconds. I'm surprised that this is the case.
I, [2016-08-15T11:02:10.992156 #54583] INFO -- : (0.414288s)
UPDATE objects
SET XYZ = TRUE
WHERE id = 119498834
I, [2016-08-15T11:02:11.097004 #54583] INFO -- : (0.622775s)
UPDATE objects
SET XYZ = TRUE
WHERE id = 119498641
I, [2016-08-15T11:02:11.097074 #54583] INFO -- : (0.415521s)
UPDATE objects
SET XYZ = TRUE
WHERE id = 119498826
Any ideas on:
Why is this happening?
How can I increase the update speed with the multi-threaded approach?

1. Have you configured Sequel so that it has a connection pool of 5 connections?
2. Have you considered doing multiple updates per call via an IN clause?
If you haven't done 1, you have N threads fighting over fewer than N connections, which equates to resource starvation, a classic concurrency issue. (A quick sketch of point 1 is below.)
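For point 1, here is a minimal sketch of what that might look like with Sequel, reusing the DB_CREDS and queue from the question; max_connections sizes Sequel's connection pool, and the :done sentinel is just an illustrative way to let the workers drain the queue and exit. It's a sketch, not a drop-in fix:

require 'sequel'

MAX_THREADS = 5

# Give the pool one connection per worker thread so no thread
# has to wait for a free connection.
DB = Sequel.connect(DB_CREDS.merge(max_connections: MAX_THREADS))

workers = MAX_THREADS.times.map do
  Thread.new do
    # Queue#pop blocks until something is available; each worker
    # stops when it sees its :done sentinel.
    while (row = queue.pop) != :done
      DB[:objects].where(id: row[:id]).update(XYZ: true)
    end
  end
end

MAX_THREADS.times { queue.push(:done) }
workers.each(&:join)

With the pool at least as large as the number of threads, the threads are no longer starved waiting for a connection.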

Your example can be reduced to: DB[:objects].where(:XYZ=>false).update(:XYZ=>true)
I'm guessing your actual need is not that simple. But the same approach may still work. Instead of issuing a query per row, use a single query to update all related rows.
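If the new values have to be computed in Ruby (the JSON manipulation mentioned in the question), a middle ground is one statement per batch instead of one per row, using PostgreSQL's UPDATE ... FROM (VALUES ...) form. A rough sketch, assuming a hypothetical updates array of [id, new_extra_fields] pairs and assuming extra_fields is a jsonb column:

# updates is assumed to be an array of [id, new_extra_fields] pairs
updates.each_slice(1000) do |batch|
  values = batch.map { |id, json| "(#{id.to_i}, #{DB.literal(json)})" }.join(', ')

  DB.run(<<-SQL)
    UPDATE objects AS o
    SET extra_fields = v.extra_fields::jsonb, XYZ = TRUE
    FROM (VALUES #{values}) AS v(id, extra_fields)
    WHERE o.id = v.id
  SQL
end

DB.literal is Sequel's way of safely quoting a Ruby value for interpolation into raw SQL, so the JSON payload doesn't need manual escaping.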

I went through something similar on a project ("import all history from a legacy database into a new one with completely different structure and organization"). Unless you managed to shoot yourself in the foot somewhere else, you have 2 basic bottlenecks to look for:
the database's disk IO
the ruby process' CPU
Some suggestions:
database IO: use DB transactions and update ~1000 records per transaction (you can tweak the exact number, but 1000 is usually good). A huge DB table usually means a lot of indexes too, and every couple of update actions will trigger REINDEX and AUTOVACUUM work within the DB, which causes a significant drop in update speed. A transaction basically lets you push 1000 updated records without that overhead and then perform it once; the result is MUCH faster (something like an order of magnitude). See the sketch after this list.
database IO: change indexes. Drop every index you can live without during the update process; ideally you will have only one very streamlined index that allows unique row lookups for update purposes.
ruby CPU: unless you are using JRuby or Rubinius, or are REALLY paying the price of network latency to your DB, threads will do you no big benefit; use fork/processes instead (see GIL). You did a great job choosing Sequel over AR for this.
ruby CPU: if you decide to go threads + JRuby with this, don't forget to plug in JProfiler. It's amazing at tracing bottlenecks in Java, and the author of Sidekiq swears it is amazing for JRuby too. Unfortunately, AFAIK, there is no equivalent of JProfiler for C Ruby (there are profiling tools, but nothing nearly as useful).
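A minimal sketch of the transaction and fork/processes suggestions combined, assuming the ids to update have already been collected into an ids array and that DB_CREDS is the connection hash from the question. Each child opens its own connection, since a database connection must not be shared across fork:

require 'sequel'

NUM_PROCESSES = 4
BATCH_SIZE    = 1000

slices = ids.each_slice((ids.size / NUM_PROCESSES.to_f).ceil).to_a

pids = slices.map do |slice|
  Process.fork do
    db = Sequel.connect(DB_CREDS)        # fresh connection per child process
    slice.each_slice(BATCH_SIZE) do |batch|
      db.transaction do                  # one COMMIT per BATCH_SIZE updates
        batch.each { |id| db[:objects].where(id: id).update(XYZ: true) }
      end
    end
    db.disconnect
  end
end

pids.each { |pid| Process.wait(pid) }

Tune NUM_PROCESSES and BATCH_SIZE against the CPU and disk IO ceilings described below.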
After you implement these suggestions, you know you did all you could when:
all of the CPUs on the Ruby box are at 100% load
the hard disk IO of the DB is at 100% throughput
Find this sweet spot and don't add additional Ruby update threads/processes after that (or add more hardware), and that's that.
PS: check out https://github.com/ruby-concurrency/concurrent-ruby - it's a great parallelization lib.
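If you do stay with threads (e.g. on JRuby), here is a small sketch of what that could look like with concurrent-ruby's fixed-size thread pool, sized to match the Sequel connection pool, instead of hand-rolled Thread bookkeeping:

require 'concurrent'

pool = Concurrent::FixedThreadPool.new(5)   # match Sequel's max_connections

until queue.empty?
  row = queue.shift
  pool.post do
    DB[:objects].where(id: row[:id]).update(XYZ: true)
  end
end

pool.shutdown
pool.wait_for_termination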

Related

synchronization between 2 applications polling a SQL table

I have 2 instances of a VB.NET application, each running on its own dedicated server. The application runs a While True loop with a 5 s sleep when IDLE (IDLE is when the table doesn't have any ProcessQuery to be treated). On each iteration, the application queries a table in the SQL database to see if there is anything it could process.
The problem is that sometimes both instances "take" the same ProcessQuery.
I'm using EntityFramework6. I have looked into EntityState, but I don't think it does exactly what I'm trying to accomplish.
I was wondering what the solution would be to have properly parallel instances. It's not impossible that at some point I'll have 12 instances running on 12 machines.
Thanks!
Dim conn As New Info_IndusEntities()
Dim DemandeWilma As WilmaDemandes = conn.WilmaDemandes.Where(Function(x) x.Site = "LONDON" AndAlso x.Statut = "toProcess").OrderBy(Function(x) x.RequestDate).FirstOrDefault
If Not IsNothing(DemandeWilma) Then
    DemandeWilma.Statut = Statuts.EnTraitement.ToString
    DemandeWilma.ServerName = Environment.MachineName
    DemandeWilma.ProcessDate = DateTime.Now
    conn.SaveChanges()
    Return DemandeWilma
End If
UPDATE (21/06/19)
I found an article that I find interesting.
I started by adding a RowVersion column to my table, then refreshed my model and enabled the Concurrency Check property on the RowVersion column in my ORM.
When I tested the update, here's the EF6 log:
UPDATE [dbo].[WilmaDemandes] SET [Statut] = @0, [ServerName] = @1,
[DateDebut] = @2 WHERE (([ID] = @3) AND ([RowVersion] = @4)) SELECT
[RowVersion] FROM [dbo].[WilmaDemandes] WHERE @@ROWCOUNT > 0 AND [ID]
= @3
-- @0: 'EnTraitement' (Type = String, Size = 20)
-- @1: 'TRB5995' (Type = String, Size = 20)
-- @2: '2019-06-25 7:31:01 AM' (Type = DateTime2)
-- @3: '124373' (Type = Int32)
-- @4: 'System.Byte[]' (Type = Binary, Size = 8)
-- Executing at 2019-06-25 7:31:24 AM -04:00
-- Completed in 95 ms with result: SqlDataReader
Closed connection at 2019-06-25 7:31:24 AM -04:00
Exception thrown:
'System.Data.Entity.Infrastructure.DbUpdateConcurrencyException' in
EntityFramework.dll
UPDATED (25/06/19)
The problem, as explained in this post, starts when you are using DB-First instead of Code-First: your property gets silently overwritten as soon as you update the model. Some people coded a console-app workaround that they run on pre-build. I'm not sure I'm quite ready to take this as the final solution.
Interesting tutorial on how to test optimistic concurrency and ways to resolve such an exception.
Add an "owner" column to your queue table
Your application updates one record (TOP 1) and sets the owner value to their identifier (WHERE Owner IS NULL)
Now your application goes back and reads their owned rows and processes them
It's a simple pattern and it works great. If any processes happen to take ownership 'simultaneously', only one will actually get the reservation.
I'm not very good at LINQ so here's a brute force method, multiline for clarity:
' First try reserving a row
conn.Database.ExecuteSqlCommand(
    "WITH UpdateTop1 AS
        (SELECT TOP 1 * FROM WilmaDemandes
         WHERE Owner IS NULL
           AND Site = 'LONDON'
         ORDER BY RequestDate)
     UPDATE UpdateTop1 SET Owner = 'ThisApplication'"
)

' See if we got one
Dim DemandeWilma As WilmaDemandes =
    conn.WilmaDemandes.
        Where(Function(x) x.Owner = "ThisApplication").
        FirstOrDefault()

' If we got a row, process it. Otherwise idle and repeat.
There's also no reason that you must reserve only one row. You could reserve all the free rows and work your way through them; meanwhile, other processes will pick up any subsequently arriving rows.
Personally, I would refactor your status column so that it is NULL for new records ready to be processed and otherwise holds the ID of the worker that has reserved the record.
It also helps to add things like timestamp columns to record when the row was reserved, etc.

How to prevent django from loading objects in memory when using `delete()`?

I'm having memory issues because it looks like Django is loading the objects into memory when using delete(). Is there any way to prevent Django from doing that?
From the Django docs:
Django needs to fetch objects into memory to send signals and handle cascades. However, if there are no cascades and no signals, then Django may take a fast-path and delete objects without fetching into memory. For large deletes this can result in significantly reduced memory usage. The amount of executed queries can be reduced, too.
https://docs.djangoproject.com/en/1.8/ref/models/querysets/#delete
I don't use signals. I do have foreign keys on the model I'm trying to delete, but I don't see why Django would need to load the objects into memory. It looks like it does, because my memory is rising as the query runs.
You can use a function like this to iterate over a huge number of objects without using too much memory:
import gc

def queryset_iterator(qs, batchsize=500, gc_collect=True):
    # Walk the primary keys with an iterator, then fetch the actual
    # objects in batches so only `batchsize` rows live in memory at once.
    iterator = qs.values_list('pk', flat=True).order_by('pk').distinct().iterator()
    eof = False
    while not eof:
        primary_key_buffer = []
        try:
            while len(primary_key_buffer) < batchsize:
                primary_key_buffer.append(next(iterator))
        except StopIteration:
            eof = True
        for obj in qs.filter(pk__in=primary_key_buffer).order_by('pk').iterator():
            yield obj
        if gc_collect:
            gc.collect()
Then you can use the function to iterate over the objects to delete:
for obj in queryset_iterator(HugeQueryset.objects.all()):
    obj.delete()
For more information you can check this blog post.
You can import the Django database connection and use it with raw SQL to delete. I had the exact same problem and this helped me a lot. Here's a snippet (I'm using MySQL, by the way, but you can run any SQL statement):
from django.db import connection

cursor = connection.cursor()
try:
    # Let the driver quote the date value instead of interpolating it into the string
    cursor.execute("DELETE FROM usage WHERE date < %s ORDER BY date", [date])
finally:
    cursor.close()
This should execute only the delete operation on that table without affecting any of your model relationships.

getgroup() is very slow

I am using the function GetGroups() to read all of the groups of a user in Active Directory.
I'm not sure if I'm doing something wrong, but it is very, very slow; each time it reaches this point, it takes several seconds. I'm also accessing the rest of Active Directory using the built-in AccountManagement functions, and that executes instantly.
Here's the code:
For y As Integer = 0 To AccountCount - 1
    Dim UserGroupArray As PrincipalSearchResult(Of Principal) = UserResult(y).GetGroups()
    UserInfoGroup(y) = New String(UserGroupArray.Count - 1) {}
    For i As Integer = 0 To UserGroupArray.Count - 1
        UserInfoGroup(y)(i) = UserGroupArray(i).ToString()
    Next
Next
Later on...:
AccountChecker_Listview.Groups.Add(New ListViewGroup(Items(y, 0), HorizontalAlignment.Left))
For i As Integer = 0 To UserInfoGroup(y).Count - 1
    AccountChecker_Listview.Items.Add(UserInfoGroup(y)(i)).Group = AccountChecker_Listview.Groups(y)
Next
Items(,) contains the normal Active Directory data that I display; Items(y, 0) contains the username.
y indexes over the user accounts in AD. I also have some other code for the other information in this loop, but that's not the issue here.
Does anyone know how to make this go faster, or is there another solution?
I'd recommend trying to find out where the time is spent. One option is to use a profiler, either the one built into Visual Studio or a third-party profiler like Redgate's ANTS Profiler or the YourKit .NET Profiler.
Another is to trace the time taken using the System.Diagnostics.Stopwatch class and use the results to guide your optimization efforts. For example time the function that retrieves data from Active Directory and separately time the function that populates the view to narrow down where the bottleneck is.
If the bottleneck is in the Active Directory lookup you may want to consider running the operation asynchronously so that the window is not blocked and populates as new data is retrieved. If it's in the listview you may want to consider for example inserting the data in a batch operation.

how to update Rails.cache TTL on read/fetch

I would like to keep my objects in the Rails cache so long as there is a read within some interval (say 10 minutes). I can successfully set a TTL on cache object creation with:
Rails.cache.fetch('key', expires_in: 10.minutes) do
some_expensive_operation
end
But I notice that subsequent cache reads for 'key' do not up the TTL (at least not in my setup, which is Rails 3.2 + Redis).
Is there a way to have Rails.cache.{fetch,read} re-up the TTL for cache hits?
My alternative is to do the following, which seems potentially wasteful:
result = Rails.cache.read('key') || some_expensive_operation
Rails.cache.write('key', result, expires_in: 10.minutes)
Here's a redis-specific solution I came up with:
def fetch(key, ttl_seconds)
  cached_value = $redis.get(key)
  if cached_value
    # re-up the TTL
    $redis.expire(key, ttl_seconds)
    result = Marshal.load(cached_value)
  else
    result = yield
    $redis.setex(key, ttl_seconds, Marshal.dump(result))
  end
  result
end

fetch('key', 10.minutes) do
  some_expensive_operation
end
I'd prefer something that's cache-provider neutral, but that just doesn't seem to be part of the Rails caching API, unless I've missed something.
As I understand it, you want to update the TTL on fetch just to make the cache store only frequently used records; I guess the reason is cache size.
Just don't do that! Let the cache engine decide which records to keep. If you're using Memcached, it does this job out of the box and you never have to bother deleting old records. If you're using Redis, you need to configure it as an LRU cache.
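For Redis, the relevant settings live in redis.conf (they can also be applied at runtime with CONFIG SET); the 256mb limit below is just an illustrative value:

# Cap memory and evict least-recently-used keys once the cap is reached.
# Use volatile-lru instead of allkeys-lru if the same instance also stores
# non-cache keys that must never be evicted.
maxmemory 256mb
maxmemory-policy allkeys-lru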

Doing an atomic update of the first instance in a QuerySet

I'm working on a system which has to handle a number of race-conditions when serving jobs to a number of worker-machines.
The clients would query the system for jobs with status='0' (ToDo), then, in an atomic way, update the 'oldest' row with status='1' (Locked) and retrieve the id for that row (for updating the job with worker information like which machine is working on it etc.).
The main issue here is that there might be any number of clients updating at the same time. A solution would be to lock around 20 of the rows with status='0', update the oldest one and release all the locks again afterwards. I've been looking into the TransactionMiddleware but I don't see how this would prevent the case of the oldest one being updated from under me after I query it.
I've looked into the QuerySet.update() thing, and it looks promising, but in the case of two clients getting hold of the same record, the status would simply be updated, and we would have two workers working on the same job. I'm really at a loss here.
I also found ticket #2705 which seems to handle the case nicely, but I have no idea how to get the code from there because of my limited SVN experience (the last updates are simply diffs, but I don't know how to merge that with the trunk of the code).
Code: Result = Job
class Result(models.Model):
    """
    Result: completed- and pending runs
    'ToDo': job hasn't been acquired by a client
    'Locked': job has been acquired
    'Paused'
    """
    # relations
    run = models.ForeignKey(Run)
    input = models.ForeignKey(Input)
    PROOF_CHOICES = (
        (1, 'Maybe'),
        (2, 'No'),
        (3, 'Yes'),
        (4, 'Killed'),
        (5, 'Error'),
        (6, 'NA'),
    )
    proof_status = models.IntegerField(
        choices=PROOF_CHOICES,
        default=6,
        editable=False)
    STATUS_CHOICES = (
        (0, 'ToDo'),
        (1, 'Locked'),
        (2, 'Done'),
    )
    result_status = models.IntegerField(choices=STATUS_CHOICES, editable=False, default=0)
    # != 'None' => status = 'Done'
    proof_data = models.FileField(upload_to='results/',
                                  null=True, blank=True)
    # part of the proof_data
    stderr = models.TextField(editable=False,
                              null=True, blank=True)
    realtime = models.TimeField(editable=False,
                                null=True, blank=True)
    usertime = models.TimeField(editable=False,
                                null=True, blank=True)
    systemtime = models.TimeField(editable=False,
                                  null=True, blank=True)
    # updated when client sets status to locked
    start_time = models.DateTimeField(editable=False)
    worker = models.ForeignKey('Worker', related_name='solved',
                               null=True, blank=True)
To merge #2705 into your django, you need to download it first:
cd <django-dir>
wget -O for_update_11366_cdestigter.diff "http://code.djangoproject.com/attachment/ticket/2705/for_update_11366_cdestigter.diff?format=raw"
then rewind svn to the necessary django version:
svn update -r11366
then apply it:
patch -p1 < for_update_11366_cdestigter.diff
It will inform you which files were patched successfully and which were not. In the unlikely case of conflicts you can fix them manually looking at http://code.djangoproject.com/attachment/ticket/2705/for_update_11366_cdestigter.diff
To unapply the patch, just write
svn revert --recursive .
If your django is running on one machine, there is a much simpler way to do it... Excuse the pseudo-code as the details of your implementation aren't clear.
from threading import Lock

workers_lock = Lock()

def get_work(request):
    workers_lock.acquire()
    try:
        # Imagine this method exists for brevity
        work_item = WorkItem.get_oldest()
        work_item.result_status = 1
        work_item.save()
    finally:
        workers_lock.release()
    return work_item
You have two choices off the top of my head. One is to lock rows immediately upon retrieval and only release the lock once the appropriate one has been marked as in use. The problem here is that no other client process can even look at the jobs which don't get selected. If you're always just automatically selecting the last one, then the window may be brief enough to be OK for you.
The other option would be to bring back the rows that are open at the time of the query, but to then check again whenever the client tries to grab a job to work with. When a client attempts to update a job to work on it a check would first be done to see if it's still available. If someone else has already grabbed it then a notification would be sent back to the client. This allows all of the clients to see all of the jobs as snapshots, but if they are constantly grabbing the latest one then you might have the clients constantly receiving notifications that a job is already in use. Maybe this is the race condition to which you're referring?
One way to get around that would be to return the jobs in specific groups to the clients so that they are not always getting the same lists. For example, break them down by geographic area or even just randomly. For example, each client could have an ID of 0 to 9. Take the mod of an ID on the jobs and send back those jobs with the same ending digit to the client. Don't limit it to just those jobs though, as you don't want there to be jobs that you can't reach. So for example if you had clients of 1, 2, and 3 and a job of 104 then no one would be able to get to it. So, once there aren't enough jobs with the correct ending digit jobs would start coming back with other digits just to fill the list. You might need to play around with the exact algorithm here, but hopefully this gives you an idea.
How you lock the rows in your database in order to update them and/or send back the notifications will largely depend on your RDBMS. In MS SQL Server you could wrap all of that work nicely in a stored procedure as long as user intervention isn't needed in the middle of it.
I hope this helps.