Path references in QueryDSL join to subquery - sql

I'd like to know the best way to implement this query, which joins to a subquery, in SQL-style QueryDSL. I struggled a bit but got it to generate the necessary SQL. However, I'm wondering whether there are any simplifications or improvements, particularly around the three "paths" I had to create. For example, it would be great to define latestCaseId in terms of latestSubQuery.
In the simplified form of my actual query below, I am finding the set of records (fields spread across ucm and pcm) which have the latest timestamp per case group. The subquery identifies the latest timestamp per group so that we can filter the outer query by it.
final SimplePath<ListSubQuery> latestSubQueryPath = Expressions.path(ListSubQuery.class, "latest");
final SimplePath<Timestamp> caseLatestMentioned = Expressions.path(Timestamp.class, "caseLatestMentioned");
final SimplePath<Integer> latestCaseId = Expressions.path(Integer.class, "latest.caseId");
final ListSubQuery<Tuple> latest = new SQLSubQuery()
        .from(ucm2)
        .innerJoin(pcm2).on(ucm2.id.eq(pcm2.id))
        .groupBy(pcm2.caseId)
        .list(pcm2.caseId.as(latestCaseId), ucm2.lastExtracted.max().as(caseLatestMentioned));

q.from(ucm)
    .join(pcm).on(ucm.id.eq(pcm.id))
    .innerJoin(latest, latestSubQueryPath).on(pcm.caseId.eq(latestCaseId))
    .where(ucm.lastExtracted.eq(caseLatestMentioned));
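For context, the SQL I'm aiming to generate is roughly of the following shape (table and column names here are illustrative, not the exact generated output):

SELECT ucm.*, pcm.*
FROM ucm
JOIN pcm ON ucm.id = pcm.id
INNER JOIN (
    SELECT pcm2.case_id AS caseId,
           MAX(ucm2.last_extracted) AS caseLatestMentioned
    FROM ucm ucm2
    INNER JOIN pcm pcm2 ON ucm2.id = pcm2.id
    GROUP BY pcm2.case_id
) latest ON pcm.case_id = latest.caseId
WHERE ucm.last_extracted = latest.caseLatestMentioned;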

I believe you could use the .get(<various Path impls>) methods of PathBuilder. The way I like to think of it: creating final PathBuilder<Tuple> latestSubQueryPath = new PathBuilder<>(Tuple.class, "latest") and joining to it with .innerJoin(latest, latestSubQueryPath) creates an alias for the subquery. You can then use .get(<various Path impls>) to access its fields as follows:
q.from(ucm)
    .join(pcm).on(ucm.id.eq(pcm.id))
    .innerJoin(latest, latestSubQueryPath).on(pcm.caseId.eq(latestSubQueryPath.get(pcm2.caseId)))
    .where(ucm.lastExtracted.eq(latestSubQueryPath.get(maxLastExtractedDate)));
I've not run the code but hopefully this is in the right direction. If not, I'll have a look tomorrow when I have the relevant codebase to hand.
Update: As mentioned in the comments, ucm2.lastExtracted.max() requires an alias. I've called it maxLastExtractedDate above and assume it is applied to ucm2.lastExtracted.max() when the subquery is created.
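Concretely, the aliased column should show up in the generated subquery along these lines (names indicative only):

SELECT pcm2.case_id AS caseId,
       MAX(ucm2.last_extracted) AS maxLastExtractedDate
FROM ucm ucm2
INNER JOIN pcm pcm2 ON ucm2.id = pcm2.id
GROUP BY pcm2.case_id

so latestSubQueryPath.get(maxLastExtractedDate) would then resolve to latest.maxLastExtractedDate in the outer query.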

Related

Access Query link two tables with similar values

I am trying to create a select query in Access with two tables that I want to link/create a relationship between.
Normally, if both tables contain the same values, you can just "drag" and create a link between those two columns.
In this case, however, the second table has " /CUSTOMER" added at the end of the field values.
Example:
Table1.OrderNumber contains order numbers, which always contain 10 characters
Table2.Refference contains the same order numbers, but with " /CUSTOMER" added to the end.
Can I link/create a relationship between these two in a Query? And how?
Thanks for the help!
Sebastian
Table1.OrderNumber contains order numbers which always contains 10 characters
If so, try this join:
ON Table1.OrderNumber = Left(Table2.Refference, 10)
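So the full query would be something along these lines (assuming an inner join and the field names from your example; adjust to your actual schema):

SELECT Table1.*, Table2.*
FROM Table1
INNER JOIN Table2
    ON Table1.OrderNumber = Left(Table2.Refference, 10);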
For these nuanced joins you will have to use SQL rather than Design View with the diagram. Consider the following steps in MS Access:
In Design View, create the join as if the two fields matched exactly. Then run the query, which, as you point out, should return no results.
In SQL View, find the ON clause and adjust it to strip that suffix. Specifically, change this clause
ON Table1.OrderNumber = Table2.Refference
To this clause:
ON Table1.OrderNumber = REPLACE(Table2.Refference, '/CUSTOMER', '')
Then run the query to see results.
Do note: with the above change, you may get an Access warning when trying to open the query in Design View, since the join may not be able to be visualized. If you ignore the warning, the SQL change above may be reverted. Therefore, make any further changes to the query only in SQL View.
Alternatively (arguably a better solution), consider cleaning out that suffix with an UPDATE query on the source table so that the original join can work; avoiding the complexity altogether is the ideal approach. Run the SQL query below only once:
UPDATE Table2
SET Refference = REPLACE(Refference, '/CUSTOMER', '')

Django ORM filter multiple fields using 'IN' statement

So I have the following model in Django:
class MemberLoyalty(models.Model):
    date_time = models.DateField(primary_key=True)
    member = models.ForeignKey(Member, models.DO_NOTHING)
    loyalty_value = models.IntegerField()
My goal is to get, for each member, the row with the most recent date. There are many ways to do it; one of them is using a subquery that groups by member with the max date_time and filters member_loyalty by its results. The working SQL for this solution is as follows:
SELECT *
FROM member_loyalty
WHERE (date_time, member_id) IN (
    SELECT max(date_time), member_id
    FROM member_loyalty
    GROUP BY member_id
);
Another way to do this would be by joining with the subquery.
How could I translate this into a Django query? I could not find a way to filter with two fields using IN, nor a way to join with a subquery using a specific ON statement.
I've tried:
cls.objects.values('member_id', 'loyalty_value').annotate(latest_date=Max('date_time'))
But it starts grouping by the loyalty_value.
I also tried building the subquery, but I can't find how to join it or use it in a filter:
subquery = cls.objects.values('member_id').annotate(max_date=Max('date_time'))
Also, I am using MySQL, so I cannot make use of the .distinct('param') method.
This is a typical greatest-n-per-group query; Stack Overflow even has a tag for it.
I believe the most efficient way to do it with recent versions of Django is via a window query. Something along these lines should do the trick:
MemberLoyalty.objects.all().annotate(my_max=Window(
    expression=Max('date_time'),
    partition_by=F('member')
)).filter(my_max=F('date_time'))
Update: this actually won't work, because Window annotations are not filterable. I think that in order to filter on a window annotation you would need to wrap it inside a Subquery, but with Subquery you are not actually obligated to use a Window function; there is another way to do it, which is my next example.
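The same limitation shows up in plain SQL, by the way: a window expression cannot be referenced directly in a WHERE clause, so it would have to be wrapped in a derived table, roughly like this:

SELECT *
FROM (
    SELECT ml.*, MAX(date_time) OVER (PARTITION BY member_id) AS my_max
    FROM member_loyalty ml
) t
WHERE t.date_time = t.my_max;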
If either MySQL or Django does not support window queries, then a Subquery comes into play.
MemberLoyalty.objects.filter(
    date_time=Subquery(
        MemberLoyalty.objects
            .filter(member=OuterRef('member'))
            .values('member')
            .annotate(max_date=Max('date_time'))
            .values('max_date')[:1]
    )
)
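Roughly speaking, this translates to a correlated subquery of the following shape (table and column names as in the SQL from the question):

SELECT *
FROM member_loyalty ml
WHERE ml.date_time = (
    SELECT MAX(ml2.date_time)
    FROM member_loyalty ml2
    WHERE ml2.member_id = ml.member_id
);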
If even Subqueries are not available (pre-Django 1.11), then this should also work:
MemberLoyalty.objects.annotate(
    max_date=Max('member__memberloyalty__date_time')
).filter(max_date=F('date_time'))

How to simulate ActiveRecord Model.count.to_sql

I want to display the SQL used in a count. However, Model.count.to_sql will not work because count returns a Fixnum, which doesn't have a to_sql method. I think the simplest solution is to do this:
Model.where(nil).to_sql.sub(/SELECT.*FROM/, "SELECT COUNT(*) FROM")
This creates the same SQL as is used in Model.count, but is it going to cause a problem further down the line? For example, if I add a complicated where clause and some joins.
Is there a better way of doing this?
You can try
Model.select("count(*) as model_count").to_sql
You may want to dip into Arel:
Model.select(Arel.star.count).to_sql
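For reference, that should produce SQL along the lines of the following (assuming the model maps to a models table):

SELECT COUNT(*) FROM "models"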
ASIDE:
I find I often want to find sub counts, so I embed the count(*) into another query:
child_counts = ChildModel.select(Arel.star.count)
.where(Model.arel_attribute(:id).eq(
ChildModel.arel_attribute(:model_id)))
Model.select(Arel.star).select(child_counts.as("child_count"))
.order(:id).limit(10).to_sql
which then gives you all the child counts for each of the models:
SELECT *,
  (
    SELECT COUNT(*)
    FROM "child_models"
    WHERE "models"."id" = "child_models"."model_id"
  ) child_count
FROM "models"
ORDER BY "models"."id" ASC
LIMIT 10
Best of luck
UPDATE:
Not sure if you are trying to solve this in a generic way or not. Also not sure what kind of scopes you are using on your Model.
We do have a method that automatically calls a count for a query that is put into the UI layer. I found that using count(:all) is more stable than the simple count, but it sounds like that does not overlap with your use case. Maybe you can improve your solution using the except clause that we use:
scope.except(:select, :includes, :references, :offset, :limit, :order)
.count(:all)
The where clause and the joins necessary for it work just fine for us; we tend to keep the joins and the where clause since they need to be part of the count. You definitely want to remove the includes (which, in my opinion, Rails should remove automatically), but the references (much trickier, especially when a reference points at a has_many and requires a distinct) start to throw a wrench in there. If you need to use references, you may be able to convert them over to a left_join.
You may want to double-check the parameters that these "join" methods take. Some of them take table names and others take relation names. Later Rails versions have gotten better and take relation names; be sure you are looking at the docs for the right version of Rails.
Also, in our case, we spend more time trying to get sub-selects working with more complicated relationships, so we have to do some munging. It looks like we are not dealing with where clauses as much.

Placing index columns on the left of a mysql WHERE statement?

I was curious since I read it in a doc. Does writing
select * from CONTACTS where id = '098' and name like 'Tom%';
speed up the query as opposed to
select * from CONTACTS where name like 'Tom%' and id = '098';
The first has an indexed column on the left side. Does it actually speed things up or is it superstition?
Using PHP and MySQL.
Check the query plans with EXPLAIN. They should be exactly the same.
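For example, you can compare the two plans directly:

EXPLAIN SELECT * FROM CONTACTS WHERE id = '098' AND name LIKE 'Tom%';
EXPLAIN SELECT * FROM CONTACTS WHERE name LIKE 'Tom%' AND id = '098';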
This is purely superstition. I see no reason that either query would differ in speed. If it were an OR query rather than an AND query, however, then I could see that having it on the left might speed things up.
Interesting question; I tried this once, and the query plans are the same (using EXPLAIN).
But considering short-circuit evaluation, I was wondering too why there is no difference (or does MySQL fully evaluate boolean statements?).
You may be mis-remembering or mis-reading something else, regarding which side the wildcard is on in a string literal in a LIKE predicate. Putting the wildcard on the right (as in your example) allows the query engine to use any indexes that might exist on the column you are searching (in this case, name). But if you put the wildcard on the left,
select * from CONTACTS where name like '%Tom' and id = '098';
then the engine cannot use any existing index and must do a complete table scan.

Django - finding the extreme member of each group

I've been playing around with the new aggregation functionality in the Django ORM, and there's a class of problem that I think should be possible to solve, but I can't seem to get it to work. The type of query I'm trying to generate is described here.
So, let's say I have the following models -
class ContactGroup(models.Model):
    .... whatever ....

class Contact(models.Model):
    group = models.ForeignKey(ContactGroup)
    name = models.CharField(max_length=20)
    email = models.EmailField()
    ...

class Record(models.Model):
    contact = models.ForeignKey(Contact)
    group = models.ForeignKey(ContactGroup)
    record_date = models.DateTimeField(default=datetime.datetime.now)
    ... name, email, and other fields that are in Contact ...
So, each time a Contact is created or modified, a new Record is created that saves the information as it appears in the contact at that time, along with a timestamp. Now, I want a query that, for example, returns the most recent Record instance for every Contact associated to a ContactGroup. In pseudo-code:
group = ContactGroup.objects.get(...)
records_i_want = group.record_set.most_recent_record_for_every_contact()
Once I get this figured out, I just want to be able to throw a filter(record_date__lt=some_date) on the queryset, and get the information as it existed at some_date.
Anybody have any ideas?
Edit: it seems I'm not really making myself clear. Using models like these, I want a way to do the following with the pure Django ORM (no extra()):
ContactGroup.record_set.extra(where=["history_date = (select max(history_date) from app_record r where r.id=app_record.id and r.history_date <= '2009-07-18')"])
Putting the subquery in the where clause is only one strategy for solving this problem; the others are pretty well covered by the first link I gave above. I know where-clause subselects are not possible without using extra(), but I thought perhaps one of the other ways was made possible by the new aggregation features.
It sounds like you want to keep records of changes to objects in Django.
Pro Django has a section in chapter 11 (Enhancing Applications) in which the author shows how to create a model that uses another model as a client that it tracks for inserts/deletes/updates. The model is generated dynamically from the client definition and relies on signals. The code shows a most_recent() function, but you could adapt this to obtain the object state on a particular date.
I assume it is the tracking in Django that is problematic, not the SQL to obtain this, right?
First of all, I'll point out that:
ContactGroup.record_set.extra(where=["history_date = (select max(history_date) from app_record r where r.id=app_record.id and r.history_date <= '2009-07-18')"])
will not get you the same effect as:
records_i_want = group.record_set.most_recent_record_for_every_contact()
The first query returns every record associated with a particular group (or associated with any of the contacts of a particular group) that has a record_date earlier than the date/time specified in the extra. Run this in the shell and then do this to review the query Django created:
from django.db import connection
connection.queries[-1]
which reveals:
'SELECT "contacts_record"."id", "contacts_record"."contact_id", "contacts_record"."group_id", "contacts_record"."record_date", "contacts_record"."name", "contacts_record"."email" FROM "contacts_record" WHERE "contacts_record"."group_id" = 1 AND record_date = (select max(record_date) from contacts_record r where r.id=contacts_record.id and r.record_date <= \'2009-07-18\')
Not exactly what you want, right?
Now, the aggregation feature is used to retrieve aggregated data, not the objects associated with aggregated data. So if you're trying to use aggregation to minimize the number of queries executed when obtaining group.record_set.most_recent_record_for_every_contact(), you won't succeed.
Without using aggregation, you can get the most recent record for all contacts associated with a group using:
[x.record_set.all().order_by('-record_date')[0] for x in group.contact_set.all()]
Using aggregation, the closest I could get to that was:
group.record_set.values('contact').annotate(latest_date=Max('record_date'))
The latter returns a list of dictionaries like:
[{'contact': 1, 'latest_date': somedate }, {'contact': 2, 'latest_date': somedate }]
So there is one entry for each contact in a given group, with the latest record date associated with it.
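In SQL terms, that aggregation corresponds roughly to the following (table and column names taken from the query shown above):

SELECT contact_id, MAX(record_date) AS latest_date
FROM contacts_record
WHERE group_id = 1
GROUP BY contact_id;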
Anyway, the minimum number of queries is probably 1 + the number of contacts in a group. If you are interested in obtaining the result using a single query, that is also possible, but you'll have to construct your models in a different way. But that's a totally different aspect of your problem.
I hope this will help you understand how to approach the problem using aggregation and the regular ORM functions.