When Does Django Perform the Database Lookup?

From the following code:
dvdList = Dvd.objects.filter(title=someDvdTitle)[:10]
for dvd in dvdList:
    result = "Title: " + dvd.title + " # " + dvd.price + "."
When does Django do the lookup? Maybe it's just paranoia, but it seems that if I comment out the for loop, it returns a lot quicker. Is the first line just setting up a filter that the for loop then executes, or am I completely muddled up? What actually happens with those lines of code?
EDIT:
What would happen if I limited the objects.filter to 1000 and then implemented a counter in the for loop that broke out of it after 10 iterations? Would that effectively get only 10 values, or 1000?

Django QuerySets are evaluated lazily, so yes, the query won't actually be executed until you try to get values out of it (as you're doing in the for loop).
From the docs:
You can evaluate a QuerySet in the following ways:
Iteration. A QuerySet is iterable, and it executes its database query the first time you iterate over it. For example, this will print the headline of all entries in the database:
for e in Entry.objects.all():
    print(e.headline)
...(snip)...
See When Querysets are evaluated.
Per your edit:
If you limited the filter to 1000 and then implemented a counter in the for loop that broke out of it after 10 iterations, you'd still hit the database for all 1000 rows. Django has no way of knowing ahead of time exactly what you're going to do with the QuerySet; it just knows that you want some data out of it, so it executes the query it has built up.
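To actually fetch only 10 rows, slice the QuerySet before iterating; a minimal sketch reusing the question's Dvd model:
# Sketch: slicing the (lazy) QuerySet before iteration puts a LIMIT in the SQL,
# so only 10 rows ever leave the database.
dvd_list = Dvd.objects.filter(title=someDvdTitle)[:10]  # no query yet
for dvd in dvd_list:  # the query runs here, with LIMIT 10
    print(dvd.title)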

It may also be worthwhile to evaluate everything at once using list() or any other method that forces evaluation of the query. I find it boosts performance sometimes (you're not paying the cost of a database round trip every time).
You can find more about when Django evaluates QuerySets in the documentation referenced above.
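For example, a sketch reusing the question's Dvd query:
# Sketch: list() forces a single database hit; later operations reuse the cached objects.
dvds = list(Dvd.objects.filter(title=someDvdTitle)[:10])  # hits the DB once
titles = [d.title for d in dvds]  # no further queries
count = len(dvds)                 # no COUNT(*) query either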


Django - Iterating over Raw Query is slow

I have a query which uses a window function. I am using a raw query to filter on that new field, since Django doesn't allow filtering on a window function (at least in the version I am using).
So it would look something like this (simplified):
# Returns 440k lines
user_files = Files.objects.filter(file__deleted=False).filter(user__terminated__gte=today).annotate(
    row_number=Window(expression=RowNumber(), partition_by=[F("user")], order_by=[F("creation_date").desc()])
)
I am basically trying to get the last not deleted file from each user which is not terminated.
Afterwards I use following raw query to get what I want:
# returns 9k lines
sql, params = user_files.query.sql_with_params()
latest_user_files = Files.objects.raw(f'select * from ({sql}) sq where row_number = 1', params)
If I run these queries directly in the database, they run quite quickly (300 ms). But once I try to iterate over the results, or even just print them, it takes a very long time to execute: anywhere from 100 to 200 seconds, even though the query itself takes a little less than half a second. Is there anything I am missing? Is the extra row_number field in the raw query an issue?
Thank you for any hint/answers.
(Using Django 3.2 and Python 3.9)

How to check if a value is in an array without the use of a For loop?

Is this possible and/or recommended? Currently, the issue I'm having is processing time: the code checks an array of ~40 items for a value, and once it finds it, a boolean is set. This same for loop is called up to 20 times, so I was wondering whether there is a way to optimize this code so that I don't need several for loops all checking for a single answer.
Here's an example of the code
For i = 0 To iCount 'iCount up to 40
    If name = UCase(m_Array(i, 1)) Then
        '<logic>
    End If
Next
Above is an example of what I'm looking at. This little chunk of code checks the array, which is populated before this function runs and usually holds 30-40 items. With this being called up to 20 times, I feel I could reduce the run time if I could find another way to do it without so many for loops.
LINQ provides a Contains extension method, which returns a Boolean, but that won't work for multidimensional arrays. Even if it did work, if performance is the concern, then Contains wouldn't help much since, internally, all the Contains method does is loop through the items until it finds the matching item.
One way to make it faster, is to use an Exit For statement to exit the loop once the first matching item is found. At least then it won't continue searching through the rest of the items after it finds the one for which it was looking:
For i = 0 To iCount 'iCount up to 40
    If name = UCase(m_Array(i, 1)) Then
        ' logic...
        Exit For
    End If
Next
If you don't want it to have to search through the array at all, you would need to index your data. The simplest way to index your data is with a hash table; the Dictionary class is an easy-to-use implementation of one. However, in the end, a hash table (just like any other indexing method) will only help performance if the situation is right. In this situation, where the array only contains 40 or so items, it's quite possible that a hash table will be slower. The only way to know for sure is to test it both ways and see if it makes any difference.
You're currently searching the list repeatedly and doing something if a member has certain properties. Instead you could check the properties of an item once, when you add an item to the list, and perform the logic then. Instead of repeating the tests it's then only done once. No searching at all is better than even the fastest search.

Memory efficient (constant) and speed optimized iteration over a large table in Django

I have a very large table.
It's currently in a MySQL database.
I use django.
I need to iterate over each element of the table to pre-compute some particular data (maybe if I was better I could do otherwise but that's not the point).
I'd like to keep the iteration as fast as possible with a constant usage of memory.
As is already clearly explained in Limiting Memory Use in a *Large* Django QuerySet and Why is iterating through a large Django QuerySet consuming massive amounts of memory?, a simple iteration over all objects in Django will kill the machine, as it will retrieve ALL objects from the database.
Towards a solution
First of all, to reduce your memory consumption you should be sure DEBUG is False (or monkey patch the cursor: turn off SQL logging while keeping settings.DEBUG?) so that Django isn't storing queries on the connection for debugging.
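If DEBUG has to stay on, a hedged sketch of a workaround is to clear the per-connection query log that Django keeps (this is what grows during a long loop):
from django import db

# Sketch: with DEBUG = True, every executed query is appended to
# connection.queries; resetting it periodically keeps memory bounded.
for i, model in enumerate(Model.objects.all().iterator()):
    # ... do work with model ...
    if i % 10000 == 0:
        db.reset_queries()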
But even with that,
for model in Model.objects.all()
is a no go.
Not even with the slightly improved form:
for model in Model.objects.all().iterator()
Using iterator() will save you some memory by not storing the results in the QuerySet's internal cache (though not necessarily on PostgreSQL!); but it will apparently still retrieve all the objects from the database.
A naive solution
The solution in the first question is to slice the results into chunks of chunk_size using a counter. There are several ways to write it, but basically they all come down to an OFFSET + LIMIT query in SQL.
something like:
qs = Model.objects.all()
counter = 0
count = qs.count()
while counter < count:
    for model in qs[counter:counter + chunk_size].iterator():
        yield model
    counter += chunk_size
While this is memory efficient (constant memory usage, proportional to chunk_size), it's really poor in terms of speed: as OFFSET grows, both MySQL and PostgreSQL (and likely most DBs) will start choking and slowing down.
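For illustration, you can see the OFFSET in the SQL Django generates (a quick sketch with a generic Model):
# Sketch: inspect the SQL behind a sliced queryset. As the slice start grows,
# so does the OFFSET the database has to scan past before returning rows.
qs = Model.objects.all()
print(qs[100000:101000].query)
# ... roughly: SELECT ... FROM app_model LIMIT 1000 OFFSET 100000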
A better solution
A better solution is available in this post by Thierry Schellenbach.
It filters on the PK, which is way faster than offsetting (how much faster probably depends on the DB):
import gc

pk = 0
last_pk = qs.order_by('-pk')[0].pk
queryset = qs.order_by('pk')
while pk < last_pk:
    for row in queryset.filter(pk__gt=pk)[:chunksize]:
        pk = row.pk
        yield row
    gc.collect()
This is starting to get satisfactory. Now memory usage is O(C) (constant) and speed is roughly O(N).
Issues with the "better" solution
The better solution only works when the PK is available in the QuerySet.
Unfortunately, that's not always the case, in particular when the QuerySet contains combinations of distinct (group_by) and/or values (ValuesQuerySet).
For that situation the "better solution" cannot be used.
Can we do better?
Now I'm wondering if we can go faster and avoid the issue regarding QuerySets without PK.
Maybe using something that I found in other answers, but only in pure SQL: using cursors.
Since I'm quite bad with raw SQL, in particular in Django, here comes the real question:
How can we build a better Django QuerySet iterator for large tables?
My take from what I've read is that we should use server-side cursors (apparently (see references) using a standard Django cursor would not achieve the same result, because by default both the python-MySQL and psycopg connectors cache the results).
Would this really be a faster (and/or more efficient) solution?
Can this be done using raw SQL in django? Or should we write specific python code depending on the database connector?
Server Side cursors in PostgreSQL and in MySQL
That's as far as I could get for the moment...
a Django chunked_iterator()
Now, of course, the best option would be to have this method work like queryset.iterator(), rather than iterate(queryset), and be part of Django core or at least a pluggable app.
Update: Thanks to "T" in the comments for finding a Django ticket that carries some additional information. Differences in connector behaviors make it so that probably the best solution would be to create a specific chunked method rather than transparently extending iterator (sounds like a good approach to me).
An implementation stub exists, but there hasn't been any work on it in a year, and it does not look like the author is ready to jump on that yet.
Additional Refs:
Why does MYSQL higher LIMIT offset slow the query down?
How can I speed up a MySQL query with a large offset in the LIMIT clause?
http://explainextended.com/2009/10/23/mysql-order-by-limit-performance-late-row-lookups/
postgresql: offset + limit gets to be very slow
Improving OFFSET performance in PostgreSQL
http://www.depesz.com/2011/05/20/pagination-with-fixed-order/
How to get a row-by-row MySQL ResultSet in python
Server Side Cursor in MySQL
Edits:
Django 1.6 is adding persistent database connections
Django Database Persistent Connections
This should make it easier, under some conditions, to use cursors. Still, it's outside my current skills (and time to learn) how to implement such a solution.
Also, the "better solution" definitely does not work in all situations and cannot be used as a generic approach, only as a stub to be adapted case by case...
Short Answer
If you are using PostgreSQL or Oracle, you can use Django's built-in iterator:
queryset.iterator(chunk_size=1000)
This causes Django to use server-side cursors and not cache models as it iterates through the queryset. As of Django 4.1, this will even work with prefetch_related.
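A quick usage sketch (MyModel stands in for your own model):
# Sketch: stream rows through a server-side cursor on PostgreSQL/Oracle.
total = 0
for obj in MyModel.objects.all().iterator(chunk_size=1000):
    total += obj.pk  # memory stays constant regardless of table size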
For other databases, you can use the following:
def queryset_iterator(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        for obj in page:
            yield obj
            pk = obj.pk
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]
If you want to get back pages rather than individual objects to combine with other optimizations such as bulk_update, use this:
def queryset_to_pages(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        yield page
        pk = max(obj.pk for obj in page)
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]
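A usage sketch pairing it with bulk_update ("score" is a hypothetical field on MyModel):
# Sketch: rewrite a column page by page while keeping memory constant.
for page in queryset_to_pages(MyModel.objects.all()):
    for obj in page:
        obj.score = len(str(obj.pk))  # stand-in for real per-object logic
    MyModel.objects.bulk_update(page, ["score"])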
Performance Profiling on PostgreSQL
I profiled a number of different approaches on a PostgreSQL table with about 200,000 rows on Django 3.2 and Postgres 13. For every query, I added up the sum of the ids, both to ensure that Django was actually retrieving the objects and so that I could verify correctness of iteration between queries. All of the timings were taken after several iterations over the table in question to minimize caching advantages of later tests.
Basic Iteration
The basic approach is just iterating over the table. The main issue with this approach is that the amount of memory used is not constant; it grows with the size of the table, and I've seen this run out of memory on larger tables.
x = sum(i.id for i in MyModel.objects.all())
Wall time: 3.53 s, 22MB of memory (BAD)
Django Iterator
The Django iterator (at least as of Django 3.2) fixes the memory issue with minor performance benefit. Presumably this comes from Django spending less time managing cache.
assert sum(i.id for i in MyModel.objects.all().iterator(chunk_size=1000)) == x
Wall time: 3.11 s, <1MB of memory
Custom Iterator
The natural comparison point is attempting to do the paging ourselves with progressively advancing queries on the primary key. While this is an improvement over naive iteration in that it has constant memory, it actually loses to Django's built-in iterator on speed because it makes more database queries.
def queryset_iterator(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        for obj in page:
            yield obj
            pk = obj.pk
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]
assert sum(i.id for i in queryset_iterator(MyModel.objects.all())) == x
Wall time: 3.65 s, <1MB of memory
Custom Paging Function
The main reason to use the custom iteration is so that you can get the results in pages. This function is very useful to then plug in to bulk-updates while only using constant memory. It's a bit slower than queryset_iterator in my tests and I don't have a coherent theory as to why, but the slowdown isn't substantial.
def queryset_to_pages(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        yield page
        pk = max(obj.pk for obj in page)
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]
assert sum(i.id for page in queryset_to_pages(MyModel.objects.all()) for i in page) == x
Wall time: 4.49 s, <1MB of memory
Alternative Custom Paging Function
Given that Django's queryset iterator is faster than doing the paging ourselves, the queryset pager can alternately be implemented on top of it. It's a little bit faster than doing the paging ourselves, but the implementation is messier. Readability matters, which is why my personal preference is the previous paging function, but this one can be better if your queryset doesn't have a primary key in the results (for whatever reason).
def queryset_to_pages2(queryset, page_size=1000):
    page = []
    page_count = 0
    for obj in queryset.iterator():
        page.append(obj)
        page_count += 1
        if page_count == page_size:
            yield page
            page = []
            page_count = 0
    if page:  # don't yield an empty trailing page
        yield page
assert sum(i.id for page in queryset_to_pages2(MyModel.objects.all()) for i in page) == x
Wall time: 4.33 s, <1MB of memory
Bad Approaches
The following are approaches you should never use (many of which are suggested in the question) along with why.
Do NOT Use Slicing on an Unordered Queryset
Whatever you do, do NOT slice an unordered queryset. This does not correctly iterate over the table. The reason is that the slice operation performs a SQL LIMIT + OFFSET query based on your queryset, and Django querysets have no order guarantee unless you use order_by. Additionally, PostgreSQL does not have a default order by, and the Postgres docs specifically warn against using LIMIT + OFFSET without ORDER BY. As a result, each time you take a slice, you are getting a non-deterministic slice of your table, which means your slices may overlap and may not cover all rows of the table between them. In my experience, this only happens if something else is modifying data in the table while you are doing the iteration, which only makes this problem more pernicious, because it means the bug might not show up if you are testing your code in isolation.
def very_bad_iterator(queryset, page_size=1000):
    counter = 0
    count = queryset.count()
    while counter < count:
        for model in queryset[counter:counter + page_size].iterator():
            yield model
        counter += page_size
assert sum(i.id for i in very_bad_iterator(MyModel.objects.all())) == x
Assertion Error; i.e. INCORRECT RESULT COMPUTED!!!
Do NOT use Slicing for Whole-Table Iteration in General
Even if we order the queryset, list slicing is abysmal from a performance perspective. This is because SQL offset is a linear time operation, which means that a limit + offset paged iteration of a table will be quadratic time, which you absolutely do not want.
def bad_iterator(queryset, page_size=1000):
    counter = 0
    count = queryset.count()
    while counter < count:
        for model in queryset.order_by("id")[counter:counter + page_size].iterator():
            yield model
        counter += page_size
assert sum(i.id for i in bad_iterator(MyModel.objects.all())) == x
Wall time: 15s (BAD), <1MB of memory
Do NOT use Django's Paginator for Whole-Table Iteration
Django comes with a built-in Paginator. It may be tempting to think that it is appropriate for doing a paged iteration of a database table, but it is not. The point of Paginator is to return a single page of a result set to a UI or an API endpoint. It is substantially slower than any of the good approaches at iterating over a table.
from django.core.paginator import Paginator

def bad_paged_iterator(queryset, page_size=1000):
    p = Paginator(queryset.order_by("pk"), page_size)
    for i in p.page_range:
        yield p.get_page(i)
assert sum(i.id for page in bad_paged_iterator(MyModel.objects.all()) for i in page) == x
Wall time: 13.1 s (BAD), <1MB of memory
The essential answer: use raw SQL with server-side cursors.
Sadly, until Django 1.5.2 there is no formal way to create a server-side MySQL cursor (I'm not sure about other database engines). So I wrote some magic code to solve this problem.
For Django 1.5.2 and MySQLdb 1.2.4, the following code will work. It's also well commented.
Caution: This is not based on public APIs, so it will probably break in future Django versions.
# This script should be tested under a Django shell, e.g., ./manage.py shell
from types import MethodType

import MySQLdb.cursors
import MySQLdb.connections
from django.db import connection
from django.db.backends.util import CursorDebugWrapper


def close_sscursor(self):
    """An instance method which replaces the close() method of the old cursor.

    Closing the server-side cursor with the original close() method would be
    quite slow and memory-intensive if the large result set was not exhausted,
    because fetchall() is called internally to get the remaining records.
    Notice that the close() method is also called when the cursor is garbage
    collected.

    This method is more efficient at closing the cursor, but if the result set
    is not fully iterated, the next cursor created from the same connection
    won't work properly. You can avoid this by either (1) closing the
    connection before creating a new cursor, or (2) iterating over the result
    set before closing the server-side cursor.
    """
    if isinstance(self, CursorDebugWrapper):
        self.cursor.cursor.connection = None
    else:
        # This is for a CursorWrapper object
        self.cursor.connection = None


def get_sscursor(connection, cursorclass=MySQLdb.cursors.SSCursor):
    """Get a server-side MySQL cursor."""
    if connection.settings_dict['ENGINE'] != 'django.db.backends.mysql':
        raise NotImplementedError('Only the MySQL engine is supported')
    cursor = connection.cursor()
    if isinstance(cursor, CursorDebugWrapper):
        # Get the real MySQLdb.connections.Connection object
        conn = cursor.cursor.cursor.connection
        # Replace the internal client-side cursor with a server-side cursor
        cursor.cursor.cursor = conn.cursor(cursorclass=cursorclass)
    else:
        # This is for a CursorWrapper object
        conn = cursor.cursor.connection
        cursor.cursor = conn.cursor(cursorclass=cursorclass)
    # Replace the old close() method
    cursor.close = MethodType(close_sscursor, cursor)
    return cursor


# Get the server-side cursor
cursor = get_sscursor(connection)

# Run a query with a large result set. Notice that the memory consumption is low.
cursor.execute('SELECT * FROM million_record_table')

# Fetch a single row, fetchmany() rows, or iterate over it via "for row in cursor:"
cursor.fetchone()

# You can interrupt the iteration at any time. This calls the new close() method,
# so no warning is shown.
cursor.close()

# The connection must be closed to let new cursors work properly. See the comments
# of close_sscursor().
connection.close()
There is another option available. It wouldn't make the iteration faster (in fact it would probably slow it down), but it would make it use far less memory. Depending on your needs, this may be appropriate.
large_qs = MyModel.objects.all().values_list("id", flat=True)
for model_id in large_qs:
    model_object = MyModel.objects.get(id=model_id)
    # do whatever you need to do with the model here
Only the ids are loaded into memory, and the objects are retrieved and discarded as needed. Note the increased database load and slower runtime, both tradeoffs for the reduction in memory usage.
I've used this when running async scheduled tasks on worker instances, for which it doesn't really matter if they are slow, but if they try to use way too much memory they may crash the instance and therefore abort the process.
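If the per-row .get() calls become the bottleneck, one hedged variation (not part of the original answer) is to batch the id lookups with in_bulk():
# Sketch: still only ids held up front, but objects are fetched 1000 at a time
# instead of one query per row.
ids = list(MyModel.objects.values_list("id", flat=True))
for start in range(0, len(ids), 1000):
    for obj in MyModel.objects.in_bulk(ids[start:start + 1000]).values():
        pass  # do whatever you need to do with the model here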

Need for long and dynamic select query/view sqlite

I have a need to generate a long select query of potentially thousands of where conditions like (table1.a = ? OR table1.a = ? OR ...) AND (table2.b = ? OR table2.b = ? ...) AND....
I initially started building a class to make this more bearable, but have since stopped to wonder whether this will work well. This query is going to hammer a table of potentially tens of millions of rows, joined with 2 more tables of thousands of rows each.
A number of concerns are stemming from this:
1.) I wanted to use these statements to generate a temporary view so I could easily transfer over the existing code base. The point here is that I want to filter the data I have down for analysis based on parameters selected in a GUI, so how poorly will a view do in this scenario?
2.) Can sqlite even parse a query with thousands of binds?
3.) Isn't there a framework that can make generating this query easier other than with string concatenation?
4.) Is the better solution to dump all of the WHERE variables into hash sets in memory and then just write a wrapper for my DB query object that calls next() until a row is encountered that satisfies all my conditions? My concern here is that the application generates graphs procedurally on scroll, so waiting to draw while calling query.next() x 100,000 might cause an annoying delay. Ideally I don't want to wait more than 30 ms at a time for the next row that satisfies everything.
edit:
New issue: it came to my attention that sqlite3 is limited to 999 bind values (host parameters) at compile time.
So it seems as if the only way to accomplish what I had originally intended is to
1.) Generate the entire query via string concatenation (my biggest concern being that I don't know how slow parsing all of that data inside sqlite3 will be)
or
2.) Do the blanket query method (select * from * where index > ? limit ?) and call next() until I hit valid data in my compiled code (including updating the index variable and re-querying repeatedly).
I did end up writing a wrapper around the QSqlQuery object that walks the table using index > variable and a limit.
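A minimal sketch of that walking pattern, shown here with Python's sqlite3 module for brevity (table and column names are hypothetical):
import sqlite3

# Sketch: keyset-style walk over a large table, fetching a bounded chunk per
# query instead of binding thousands of parameters in one statement.
conn = sqlite3.connect("data.db")
last_id = 0
chunk = 1000
while True:
    rows = conn.execute(
        "SELECT id, a, b FROM table1 WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, chunk),
    ).fetchall()
    if not rows:
        break
    for row in rows:
        last_id = row[0]
        # filter against in-memory hash sets here instead of a huge WHERE clause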
Consider dumping the joined results without filters (denormalized) into a flat file and indexing it with FastBit, a bitmap index engine.

Providing a LIMIT param in Django query without taking a slice of a QuerySet

I have a utility function in my program for searching for entities. It takes a max_count parameter. It returns a QuerySet.
I would like this function to limit the max number of entries. The standard way would be to take a slice out of my QuerySet:
return results[:max_count]
My problem is that the views which utilize this function sort in various ways by using .order_by(). This causes exceptions as re-ordering is not allowed after taking a slice.
Is it possible to force a "LIMIT 1000" into my SQL query without taking a slice?
Do results[:max_count] in the view, after .order_by(). Don't be afraid of requesting too much from the DB; the query won't be evaluated until you slice it (and not even then, until you actually use the results).
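A sketch of that flow (the Entity model, field names, and template path are hypothetical):
from django.shortcuts import render

# Sketch: keep the utility lazy and unsliced, then order and slice in the view;
# the LIMIT is only applied when the queryset is finally evaluated.
def search_entities(query):
    return Entity.objects.filter(name__icontains=query)  # no slice here

def entity_list(request):
    results = search_entities(request.GET.get("q", ""))
    results = results.order_by("-created")[:1000]
    return render(request, "entities/list.html", {"entities": results})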
You could subclass QuerySet to achieve this, by simply ignoring every slice and applying [:max_count] at the end in __getitem__, but I don't think it's worth the complexity and side effects.
If you are worrying about memory consumption by large queryset, follow http://www.mellowmorning.com/2010/03/03/django-query-set-iterator-for-really-large-querysets/
For normal usage, just follow DrTyrsa's suggestion. You could write a shortcut that applies the order_by and then the slice, to simplify your code.