In Django, what is the most efficient way to check for an empty query set?

I've heard suggestions to use the following:

if qs.exists():
    ...

if qs.count():
    ...

try:
    qs[0]
except IndexError:
    ...
Copied from a comment below: "I'm looking for a statement like 'In MySQL and PostgreSQL count() is faster for short queries, exists() is faster for long queries, and use QuerySet[0] when it's likely that you're going to need the first element and you want to check that it exists. However, when count() is faster it's only marginally faster, so it's advisable to always use exists() when choosing between the two.'"

qs.exists() is the most efficient way.
Especially on PostgreSQL, count() can be very expensive, sometimes more expensive than a normal select query.
exists() runs a query with no select_related, field selections or sorting, and fetches only a single record. This is much faster than counting the entire query with table joins and sorting.
qs[0] would still include select_related, field selections and sorting, so it would be more expensive.
The Django source code is here (django/db/models/sql/query.py, Query.has_results):
https://github.com/django/django/blob/60e52a047e55bc4cd5a93a8bd4d07baed27e9a22/django/db/models/sql/query.py#L499
def has_results(self, using):
    q = self.clone()
    if not q.distinct:
        q.clear_select_clause()
    q.clear_ordering(True)
    q.set_limits(high=1)
    compiler = q.get_compiler(using=using)
    return compiler.has_results()
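To see what that trimming does in practice, you can inspect the SQL Django actually sends (a sketch; MyModel and its field are hypothetical, and connection.queries is only populated when DEBUG=True):

from django.db import connection, reset_queries
from myapp.models import MyModel  # hypothetical model

reset_queries()
MyModel.objects.filter(active=True).exists()
MyModel.objects.filter(active=True).count()
for q in connection.queries:
    print(q['sql'])
# exists() compiles to roughly: SELECT (1) AS "a" FROM ... WHERE ... LIMIT 1
# count()  compiles to roughly: SELECT COUNT(*) FROM ... WHERE ...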
Another gotcha that got me the other day is evaluating a QuerySet in an if statement. That executes the query and fetches the whole result set!
If the variable query_set may be None (an unset argument to your function), then use:

if query_set is None:
    # handle the unset argument

not:

if query_set:
    # you just hit the database

exists() is generally faster than count(), though not always (see the test below). count() can be used to check for both existence and length.
Only use qs[0] if you actually need the object. It's significantly slower if you're just testing for existence.
On Amazon SimpleDB, 400,000 rows:
bare qs: 325.00 usec/pass
qs.exists(): 144.46 usec/pass
qs.count(): 144.33 usec/pass
qs[0]: 324.98 usec/pass
On MySQL, 57 rows:
bare qs: 1.07 usec/pass
qs.exists(): 1.21 usec/pass
qs.count(): 1.16 usec/pass
qs[0]: 1.27 usec/pass
I used a random query for each pass to reduce the risk of db-level caching. Test code:
import timeit

base = """
import random
from plum.bacon.models import Session
ip_addr = str(random.randint(0, 256)) + '.' + str(random.randint(0, 256)) + '.' + str(random.randint(0, 256)) + '.' + str(random.randint(0, 256))
try:
    session = Session.objects.filter(ip=ip_addr)%s
    if session:
        pass
except:
    pass
"""

query_variations = [
    base % "",
    base % ".exists()",
    base % ".count()",
    base % "[0]",
]
for s in query_variations:
    t = timeit.Timer(stmt=s)
    print "%.2f usec/pass" % (1000000 * t.timeit(number=100) / 100000)

It depends on the context.
According to the documentation:

Use QuerySet.count()
...if you only want the count, rather than doing len(queryset).

Use QuerySet.exists()
...if you only want to find out if at least one result exists, rather than using if queryset.

But:

Don't overuse count() and exists()
If you are going to need other data from the QuerySet, just evaluate it.

So, I think that QuerySet.exists() is the most recommended way if you just want to check for an empty QuerySet. On the other hand, if you want to use the results later, it's better to evaluate the queryset.
I also think that your third option is the most expensive: it fetches a full model instance (with any select_related, field selections and ordering applied) just to check that one exists.
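A minimal sketch of that advice (Book and its fields are illustrative):

from myapp.models import Book  # hypothetical model

books = Book.objects.filter(author='Tolstoy')

# Only checking for emptiness: exists() issues a cheap LIMIT 1 query.
if books.exists():
    print('at least one match')

# Going to use the rows anyway: evaluate once and reuse the results.
books = list(books)
if books:  # no extra query; the list is already in memory
    for book in books:
        print(book.title)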

Sam Odio's solution was a decent starting point, but there are a few flaws in the methodology, namely:

The random IP address could end up matching 0 or very few results.
An exception would skew the results, so we should aim to avoid handling exceptions.

So instead of filtering on something that might match, I decided to exclude something that definitely won't match, hopefully still avoiding the DB cache, but also ensuring the same number of rows.
I only tested against a local MySQL database, with the dataset:
>>> Session.objects.all().count()
40219
Timing code:

import timeit

base = """
import random
import string
from django.contrib.sessions.models import Session
never_match = ''.join(random.choice(string.ascii_uppercase) for _ in range(10))
sessions = Session.objects.exclude(session_key=never_match){}
if sessions:
    pass
"""

query_variations = [
    "",
    ".exists()",
    ".count()",
    "[0]",
]
for variation in query_variations:
    t = timeit.Timer(stmt=base.format(variation))
    print "{} => {:02f} usec/pass".format(variation.ljust(10), 1000000 * t.timeit(number=100) / 100000)
outputs:
           => 1390.177710 usec/pass
.exists()  => 2.479579 usec/pass
.count()   => 22.426991 usec/pass
[0]        => 2.437079 usec/pass
So you can see that count() is roughly 9 times slower than exists() for this dataset.
[0] is also fast, but it needs exception handling.

I would imagine that the first method is the most efficient way (you could easily implement it in terms of the second method, so perhaps they are almost identical). The last one requires actually getting a whole object from the database, so it is almost certainly the most expensive.
But, like all of these questions, the only way to know for your particular database, schema and dataset is to test it yourself.

I ran into this problem too. Yes, exists() is faster in most cases, but it depends a lot on the kind of queryset you're dealing with. For example, for a simple query like:

my_objects = MyObject.objects.all()

you would use my_objects.exists(). But for a query like:

MyObject.objects.filter(some_attr='anything').exclude(something='what').distinct('key').values()

you probably need to test which one fits better (exists(), count(), len(my_objects)). Remember that the DB engine is the one that actually performs the query, and performance depends a lot on the data structure and how the query is formed. One thing you can do is audit the queries and test them yourself against the DB engine, then compare the results; you may be surprised by how naive Django's generated SQL sometimes is. Try QueryCountMiddleware to see all the queries executed, and you will see what I am talking about.
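A rough sketch of that kind of self-audit (MyObject, the field names and filter values are illustrative; a fresh queryset is built per call so Django's result cache doesn't skew the timings):

import timeit

from myapp.models import MyObject  # hypothetical model

variants = {
    'exists()': lambda: MyObject.objects.filter(some_attr='anything').exists(),
    'count()': lambda: MyObject.objects.filter(some_attr='anything').count(),
    'len()': lambda: len(MyObject.objects.filter(some_attr='anything')),
}
for label, fn in sorted(variants.items()):
    # 100 round-trips per variant against the real database
    print(label, timeit.timeit(fn, number=100))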

Related

Peewee .first() modifies in-memory lists

I use Peewee as my ORM and while trying to use .first() to get the first element in a list of objects, I found this weird behaviour, where the in-memory list of objects gets modified.
In [7]: events = Event.select()
In [8]: len(events)
Out[8]: 10000 # correct count of events in my local DB
In [9]: first_event = events.first()
In [10]: first_event
Out[10]: <Event: 311718318>
In [11]: len(events)
Out[11]: 1 # the in-memory list of events has somehow been changed
I initially had 10k Event objects, and just by calling .first(), the in-memory list events gets modified somehow. This seems very weird, and I faced a lot of customer issues in production because of it. So I wanted to know whether this is a known issue with Peewee, or whether my understanding isn't right.
From my previous experience with Django, .first() simply returns the first element in the list or None, and doesn't do anything to the in-memory list of objects. Why is the behaviour different in Peewee? I couldn't find anything relevant in the documentation.
Yes, Peewee .first() explicitly appends a LIMIT 1 so that multiple invocations of .first() do not result in multiple queries being executed when you only want the first result. This behavior is a bit different than .get(), which will execute a fresh query every time you call it.
You can explicitly clone your query before calling .first() if this behavior is undesirable:
events = Event.select()
first = events.clone().first()
Alternatively, I'd suggest using .get() and catching the Event.DoesNotExist or using plain-old first_event = events[0] and catching the possible IndexError.
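A quick sketch of those alternatives, assuming the Event model from the question:

events = Event.select()

# Option 1: clone before .first() so the original query is left untouched.
first_event = events.clone().first()

# Option 2: .get() executes a fresh query each call and raises if nothing matches.
try:
    first_event = events.get()
except Event.DoesNotExist:
    first_event = None

# Option 3: plain indexing; an empty result raises IndexError.
try:
    first_event = events[0]
except IndexError:
    first_event = None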

Efficient Django query filter first n records

I'm trying to filter for the first (or last) n Message objects which match a certain filter. Right now I need to filter for all matches then slice.
def get(self, request, chat_id, n):
    last_n_messages = Message.objects.filter(chat=chat_id).order_by('-id')[:n]
    last_n_sorted = reversed(last_n_messages)
    serializer = MessageSerializer(last_n_sorted, many=True)
    return Response(serializer.data, status=status.HTTP_200_OK)
This is clearly not efficient. Is there a way to get the first (or last) n items without exhaustively loading every match?
I found the answer myself. Basically the [:n] puts a SQL LIMIT in the query, which makes the query itself NOT exhaustive. So it's fine in terms of efficiency.
Link to a similar answer below:
https://stackoverflow.com/a/6574137/4775212
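A quick way to confirm this yourself (a sketch, assuming the Message model and a chat_id from the question; str(qs.query) shows the SQL Django will run):

qs = Message.objects.filter(chat=chat_id).order_by('-id')[:5]
print(qs.query)
# ... ORDER BY "myapp_message"."id" DESC LIMIT 5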
If the limit is known, it can also be done using range:
Message.objects.filter(chat__range=[chat_id, chat_id+n]).order_by('-id')

SHOW KEYS in Aerospike?

I'm new to Aerospike and am probably missing something fundamental, but I'm trying to see an enumeration of the Keys in a Set (I'm purposefully avoiding the word "list" because it's a datatype).
For example,
To see all the Namespaces, the docs say to use SHOW NAMESPACES
To see all the Sets, we can use SHOW SETS
If I want to see all the unique Keys in a Set ... what command can I use?
It seems like one can use client.scan() ... but that seems like a super heavy way to get just the key (since it fetches all the bin data as well).
Any recommendations are appreciated! As of right now, I'm thinking of inserting (deleting) into (from) a meta-record.
Thank you @pgupta for pointing me in the right direction.
This actually has two parts:
In order to retrieve original keys from the server, you must set the policy during put() calls so that the key value is saved server-side (otherwise, it seems, only a digest/hash is stored).
Here's an example in Python:

aerospike_client.put(key, {'bin': 'value'}, policy={'key': aerospike.POLICY_KEY_SEND})

Then (adapted from Aerospike's own documentation), you perform a scan with a policy that skips the bin data. From this, you can extract the keys:
Example:
keys = []
scan = client.scan('namespace', 'set')
scan_opts = {
    'concurrent': True,
    'nobins': True,
    'priority': aerospike.SCAN_PRIORITY_MEDIUM,
}
for record in scan.results(policy=scan_opts):
    keys.append(record[0][2])
The need to iterate over the results still seems a little clunky to me; I still think that using a 'master-key' record to store a list of all the other keys will be more performant in my case, since I can then make a single get() call to the Aerospike server to retrieve the list.
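A minimal sketch of that master-key idea (namespace, set and bin names are hypothetical; list_append and get are standard calls in the aerospike Python client):

import aerospike

client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()
master_key = ('test', 'demo', 'all-keys')  # one well-known meta-record

def register_key(new_key):
    # Append each newly written key to the meta-record's list bin.
    client.list_append(master_key, 'keys', new_key)

def all_keys():
    # A single get() returns the whole list of registered keys.
    _, _, bins = client.get(master_key)
    return bins.get('keys', [])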
You can choose not to bring the data back by setting includeBinData in ScanPolicy to false.

Django: get result of raw SQL query in one line

This works:
connection = get_connection()
cursor = connection.cursor()
cursor.execute('show application_name')
application_name_of_connection = cursor.fetchone()[0]
But why four lines? Is there no way to get this in one line?
No, there is not a way to do this in one line. Languages such as Python are designed to describe a set of instructions, and there are four instructions here. There may be helper methods or clever (if inefficient) arrangements of these statements that could get this down to three lines, but you are best off keeping it as it is.
For example, if you are using this code frequently, you could encapsulate it in its own method:
def get_application_name_of_connection():
    connection = get_connection()
    cursor = connection.cursor()
    cursor.execute('show application_name')
    return cursor.fetchone()[0]
And then simply:
get_application_name_of_connection()
This is how it works. You want less code? Hide the functionality.
It can be done with a comprehension, but only on sqlite:

results = [row[0] for row in get_connection().cursor().execute('select * from mytable')]

If you are using PostgreSQL, that doesn't work; you could try this for a more compact form:

with get_connection().cursor() as cur:
    cur.execute('show all')
    print cur.fetchone()

But only if you want the cursor thrown away after the block executes.
I haven't tried other db managers, so results may vary.

Getting N related models given M models

Given two models:
class Post(models.Model):
    ...

class Comment(models.Model):
    post = models.ForeignKey(Post)
    ...
Is there a good way to get the last 3 Comments on a set of Posts (i.e. in a single roundtrip to the DB instead of once per post)? A naive implementation to show what I mean:
for post in Post.objects.filter(id__in=some_post_ids):
    post.latest_comments = list(Comment.objects.filter(post=post).order_by('-id')[:3])
Given some_post_ids == [1, 2], the above will result in 3 queries:
[{'sql': 'SELECT "myapp_post"."id" FROM "myapp_post" WHERE "myapp_post"."id" IN (1, 2)', 'time': '0.001'},
{'sql': 'SELECT "myapp_comment"."id", "myapp_comment"."post_id" FROM "myapp_comment" WHERE "myapp_comment"."post_id" = 1 LIMIT 3', 'time': '0.001'},
{'sql': 'SELECT "myapp_comment"."id", "myapp_comment"."post_id" FROM "myapp_comment" WHERE "myapp_comment"."post_id" = 2 LIMIT 3', 'time': '0.001'}]
From Django's docs:
Slicing. As explained in Limiting QuerySets, a QuerySet can be sliced, using Python’s array-slicing syntax. Slicing an unevaluated QuerySet usually returns another unevaluated QuerySet, but Django will execute the database query if you use the “step” parameter of slice syntax, and will return a list. Slicing a QuerySet that has been evaluated (partially or fully) also returns a list.
Your naive implementation is correct and should only make one DB query per post. However, don't call list on it; I believe that will cause the DB to be hit immediately (although it should still be only a single query each time). The queryset is already iterable, and there really shouldn't be any need to call list. More on calling list() from the same doc page:
list(). Force evaluation of a QuerySet by calling list() on it. For example:
entry_list = list(Entry.objects.all())
Be warned, though, that this could have a large memory overhead, because Django will load each element of the list into memory. In contrast, iterating over a QuerySet will take advantage of your database to load data and instantiate objects only as you need them.
UPDATE:
With your added explanation I believe the following should work (however, it's untested so report back!):
post.latest_comments = Comment.objects.filter(post__in=some_post_ids).order_by('-id')
Admittedly it doesn't do the limit of 3 comments per post; I'm sure that's possible, but I can't think of the syntax off the top of my head. Also, remember you can always drop to a raw query on any model for better optimisation, e.g. Comment.objects.raw("SELECT ...").
Given the information here on the "select top N from group" problem, if your IN clause covers only a small number of posts, it may just be cheaper to either (a) do the multiple queries or (b) select all comments for the posts and then filter in Python, as sketched below. I'd suggest (a) if there's a small number of posts with lots of comments, and (b) if there will be relatively few comments per post.
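A minimal sketch of option (b), assuming the Post/Comment models above and some_post_ids from the question:

from collections import defaultdict

# One query fetches every comment on the posts, newest first.
comments = Comment.objects.filter(post__in=some_post_ids).order_by('-id')

# Keep only the latest 3 per post, in Python.
latest_by_post = defaultdict(list)
for comment in comments:
    if len(latest_by_post[comment.post_id]) < 3:
        latest_by_post[comment.post_id].append(comment)

for post in Post.objects.filter(id__in=some_post_ids):
    post.latest_comments = latest_by_post[post.id]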