I have two datasets that I'm trying to consolidate to represent all of the unique touch points for a given user. I've gotten as far as using ARRAY_AGG to aggregate everything down to a single session identifier, but now I want to consolidate the identifiers themselves and am stuck.
The source data looks like this:
Session_GUID | User_GUID | Interaction_GUID
Session_1    | User_1    | Interact_A
Session_1    | User_1    | Interact_B
Session_1    | User_2    | Interact_C
Session_2    | User_2    | Interact_D
Session_3    | User_3    | Interact_C
Session_4    | User_4    | Interact_E
And I've aggregated it down with a simple
SELECT
    Session_GUID,
    ARRAY_AGG(DISTINCT User_GUID) AS User_GUID_Array,
    ARRAY_AGG(DISTINCT Interaction_GUID) AS Interaction_GUID_Array
FROM
    source_table
GROUP BY
    Session_GUID
Which gets me here:
Session_GUID | User_GUID_Array    | Interaction_GUID_Array
Session_1    | [ User_1, User_2 ] | [ Interact_A, Interact_B, Interact_C ]
Session_2    | [ User_2 ]         | [ Interact_D ]
Session_3    | [ User_3 ]         | [ Interact_C ]
Session_4    | [ User_4 ]         | [ Interact_E ]
Now I'd like to aggregate everything based on matches in either of the two arrays.
So from the above, Session_1 and Session_2 get grouped together based on User_GUID matches, and Session_3 gets added too based on Interaction_GUID matches.
This seems like it should be do-able based on some sort of "do another ARRAY_AGG if these intersect/overlap conditions are met" logic. But I'm at the limits of my SQL knowledge and haven't been able to figure it out.
The end result I'm looking for is this:
Session_Array                       | User_GUID_Array            | Interaction_GUID_Array
[ Session_1, Session_2, Session_3 ] | [ User_1, User_2, User_3 ] | [ Interact_A, Interact_B, Interact_C, Interact_D ]
[ Session_4 ]                       | [ User_4 ]                 | [ Interact_E ]
Grouping by more than one column usually requires a recursive CTE, but in this case the grouping is by array intersection. One way to accomplish this is with a user-defined table function (UDTF) that maintains two two-dimensional arrays, one for each column. As each row passes through, the function checks whether it has seen any of the row's values before (in either of the two two-dimensional arrays). If it has seen at least one value in one of the groups, it returns that group's number; otherwise it starts a new group. A CTE can then use those group numbers for the ARRAY_UNION_AGG aggregation.
This approach will only work for small partitions. In this example the partition is "1", which means the entire table; if the table is large, the UDTF will run out of memory. It requires a partitioning column such as a date or an ID of some sort that keeps each partition small (a few thousand rows, perhaps), e.g. over (partition by activity_date order by session_guid) for a hypothetical activity_date column. If the partitions are significantly larger than that, this approach won't work.
create or replace function GROUP_ANY(ARR1 array, ARR2 array)
returns table(GROUP_NUMBER float)
language javascript
as
$$
{
    initialize: function (argumentInfo, context) {
        this.isInitialized = false;
        this.groupNumber = 0;
        this.arr1 = [];  // one array of seen column-1 values per group
        this.arr2 = [];  // one array of seen column-2 values per group
    },
    processRow: function (row, rowWriter, context) {
        var arraysIntersect = false;
        var g;
        if (!this.isInitialized) {
            // The first row starts group 0.
            this.isInitialized = true;
            this.arr1[0] = row.ARR1;
            this.arr2[0] = row.ARR2;
        } else {
            // Merge the row into the first group it overlaps in either column.
            // Note: if a row overlaps more than one existing group, it merges
            // into the first one only; the groups themselves are not merged.
            for (g = 0; g <= this.groupNumber; g++) {
                if (arraysOverlap(this.arr1[g], row.ARR1) || arraysOverlap(this.arr2[g], row.ARR2)) {
                    this.arr1[g] = this.arr1[g].concat(row.ARR1);
                    this.arr2[g] = this.arr2[g].concat(row.ARR2);
                    arraysIntersect = true;
                    break;
                }
            }
            if (!arraysIntersect) {
                // No overlap with any existing group: start a new one.
                this.arr1.push(row.ARR1);
                this.arr2.push(row.ARR2);
                this.groupNumber++;
            }
        }
        if (arraysIntersect) {
            rowWriter.writeRow({GROUP_NUMBER: g});
        } else {
            rowWriter.writeRow({GROUP_NUMBER: this.groupNumber});
        }
        function arraysOverlap(a, b) {
            return a.some(r => b.includes(r));
        }
    },
    finalize: function (rowWriter, context) {/*...*/},
}
$$;
create or replace table T1(Session_GUID string, User_GUID string, Interaction_GUID string);
insert into T1 (Session_GUID, User_GUID, Interaction_GUID) values
('Session_1', 'User_1', 'Interact_A'),
('Session_1', 'User_1', 'Interact_B'),
('Session_1', 'User_2', 'Interact_C'),
('Session_2', 'User_2', 'Interact_D'),
('Session_3', 'User_3', 'Interact_C'),
('Session_4', 'User_4', 'Interact_E'),
('Session_5', 'User_5', 'Interact_F'),
('Session_6', 'User_4', 'Interact_G'),
('Session_7', 'User_6', 'Interact_E'),
('Session_8', 'User_8', 'Interact_H')
;
with SESSIONS as
(
select Session_GUID
,array_unique_agg(User_GUID) USER_GUID
,array_unique_agg(Interaction_GUID) INTERACTION_GUID
from T1
group by Session_GUID
), GROUPS as
(
select * from SESSIONS, table(group_any(USER_GUID, INTERACTION_GUID)
over (partition by 1 order by SESSION_GUID ))
)
select array_agg(SESSION_GUID) SESSION_GUIDS
,array_union_agg(USER_GUID) USER_GUIDS
,array_union_agg(INTERACTION_GUID) INTERACTION_GUIDS
from GROUPS
group by GROUP_NUMBER
;
Output:
SESSION_GUIDS                             | USER_GUIDS                       | INTERACTION_GUIDS
[ "Session_5" ]                           | [ "User_5" ]                     | [ "Interact_F" ]
[ "Session_1", "Session_2", "Session_3" ] | [ "User_1", "User_2", "User_3" ] | [ "Interact_A", "Interact_B", "Interact_C", "Interact_D" ]
[ "Session_8" ]                           | [ "User_8" ]                     | [ "Interact_H" ]
[ "Session_4", "Session_6", "Session_7" ] | [ "User_4", "User_6" ]           | [ "Interact_E", "Interact_G" ]
I'm trying to improve my understanding of FaunaDB.
I have a collection that contains records like:
{
"ref": Ref(Collection("regions"), "261442015390073344"),
"ts": 1587576285055000,
"data": {
"name": "italy",
"attributes": {
"amenities": {
"camping": 1,
"swimming": 7,
"hiking": 3,
"culture": 7,
"nightlife": 10,
"budget": 6
}
}
}
}
I would like to query in a flexible way by different attributes like:
data.attributes.amenities.camping > 5
data.attributes.amenities.camping > 5 AND data.attributes.amenities.hiking > 6
data.attributes.amenities.camping < 6 AND data.attributes.amenities.culture > 6 AND data.attributes.amenities.hiking > 5 AND ...
I created an index containing all attributes, but I don't know how to do greater-than-or-equal filtering in an index that contains multiple terms.
My fallback would be to create an index for each attribute and use Intersection to get the records that are in all subqueries that I want to check, but this feels somehow wrong:
The query budget >= 6 AND camping >= 8 would be:
Index:
{
name: "all_regions_by_all_attributes",
unique: false,
serialized: true,
source: "regions",
terms: [],
values: [
{
field: ["data", "attributes", "amenities", "culture"]
},
{
field: ["data", "attributes", "amenities", "hiking"]
},
{
field: ["data", "attributes", "amenities", "swimming"]
},
{
field: ["data", "attributes", "amenities", "budget"]
},
{
field: ["data", "attributes", "amenities", "nightlife"]
},
{
field: ["data", "attributes", "amenities", "camping"]
},
{
field: ["ref"]
}
]
}
Query:
Map(
Paginate(
Intersection(
Range(Match(Index("all_regions_by_all_attributes")), [0, 0, 0, 6, 0, 8], [10, 10, 10, 10, 10, 10]),
)
),
Lambda(
["culture", "hiking", "swimming", "budget", "nightlife", "camping", "ref"],
Get(Var("ref"))
)
)
This approach has the following disadvantages:
It does not work as expected: if, for example, the first (culture) attribute is within its range but the second (hiking) is not, I still get values returned.
It causes a lot of reads due to the reference that I need to follow for each result.
Is it possible to store all values in this kind of index so that it contains all the data? I know I can just add more values to the index and access them, but that would mean I have to create a new index as soon as we add more fields to the entity. But maybe this is a common thing.
Thanks in advance.
Thanks for your question. Ben already wrote out a complete example that shows what you can do; I'll base myself on his recommendations and try to clarify further.
FaunaDB's FQL is quite powerful, which means there are multiple ways to do this, yet with such power comes a small learning curve, so I'm happy to help :). The reason it took a while to answer this question is that such an elaborate answer actually deserves a complete blog post. Well, I've never written a blog post on Stack Overflow; there is a first for everything!
There are three ways to do 'compound range-like queries', but one of them will be the most performant for your use case, and we'll see that the first approach is not entirely what you need. Spoiler: the third option described here is what you need.
Preparation - Let's throw in some data just like Ben did
I'll keep everything in one collection to keep it simpler, and I'm using the JavaScript flavour of the Fauna Query Language here. There is a good reason to separate the data into a second collection though, which is related to your second Map/Get question (see the end of this answer).
Create the collection
CreateCollection({ name: 'place' })
Throw in some data
Do(
Select(
['ref'],
Create(Collection('place'), {
data: {
name: 'mullion',
focus: 'team-building',
camping: 1,
swimming: 7,
hiking: 3,
culture: 7,
nightlife: 10,
budget: 6
}
})
),
Select(
['ref'],
Create(Collection('place'), {
data: {
name: 'church covet',
focus: 'private',
camping: 1,
swimming: 7,
hiking: 9,
culture: 7,
nightlife: 10,
budget: 6
}
})
),
Select(
['ref'],
Create(Collection('place'), {
data: {
name: 'the great outdoors',
focus: 'private',
camping: 5,
swimming: 3,
hiking: 2,
culture: 1,
nightlife: 9,
budget: 3
}
})
)
)
OPTION 1: Composite indexes with multiple values
We can put as many terms and values in an index as we want and use Match and Range to query those. However! Range probably gives you something different than you would expect if you use multiple values. Range gives you exactly what the index does, and the index sorts values lexically. If we look at the example of Range in the docs, we can extend it to multiple values.
Imagine we would have an index with two values and we write:
Range(Match(Index('people_by_age_first')), [80, 'Leslie'], [92, 'Marvin'])
Then the bounds apply lexically to the whole tuple, not to each value independently: a tuple like [85, 'Zoe'] is included because 85 lies between 80 and 92, even though 'Zoe' sorts after 'Marvin'; the second value only participates in the comparison when the first value equals one of the bounds. This is very scalable behaviour that exposes the raw power of the underlying index without overhead, but it is not exactly what you are looking for!
So let's move on to another solution!
OPTION 2: First Range, then Filter
Another quite flexible solution is to use Range and then Filter. This is a less good idea when the filter throws away a lot, though, since your pages will come back emptier. Imagine that you have 10 items in a page after the Range and then apply Filter: you can end up with pages of 2, 5, or 4 elements depending on what gets filtered out. It is a great idea, however, when one of the properties has such high cardinality that it filters out most of the entities by itself. E.g. imagine everything is timestamped: you first take a date range and then continue filtering on something that only eliminates a small percentage of the result set. I believe that in your case all of these values are distributed quite evenly, so the third solution (see below) will be the best for you.
We could in this case just throw all values in so that they all get returned which avoids a Get. For example, let's say that 'camping' is our most important filter.
CreateIndex({
name: 'all_camping_first',
source: Collection('place'),
values: [
{ field: ['data', 'camping'] },
// and the rest will not be used for filter
// but we want to return them to avoid Map/Get
{ field: ['data', 'swimming'] },
{ field: ['data', 'hiking'] },
{ field: ['data', 'culture'] },
{ field: ['data', 'nightlife'] },
{ field: ['data', 'budget'] },
{ field: ['data', 'name'] },
{ field: ['data', 'focus'] },
]
})
You can now write a query that just gets a range based on the camping value:
Paginate(Range(Match(Index('all_camping_first')), [1], [3]))
Which should return two elements (the third has camping === 5)
Now imagine that we want to filter over these and we set our pages small to avoid unnecessary work
Filter(
    Paginate(Range(Match(Index('all_camping_first')), [1], [3]), { size: 2 }),
    Lambda(
        ['camping', 'swimming', 'hiking', 'culture', 'nightlife', 'budget', 'name', 'focus'],
        And(GTE(Var('hiking'), 0), GTE(7, Var('hiking')))
    )
)
Since I want to be clear on both the advantages and disadvantages of each approach, let's show exactly how Filter works by adding another place whose attributes match our query.
Create(Collection('place'), {
data: {
name: 'the safari',
focus: 'team-building',
camping: 1,
swimming: 9,
hiking: 2,
culture: 4,
nightlife: 3,
budget: 10
}
})
Running the same query:
Filter(
    Paginate(Range(Match(Index('all_camping_first')), [1], [3]), { size: 2 }),
    Lambda(
        ['camping', 'swimming', 'hiking', 'culture', 'nightlife', 'budget', 'name', 'focus'],
        And(GTE(Var('hiking'), 0), GTE(7, Var('hiking')))
    )
)
This now still returns only one value but provides you with an 'after' cursor that points to the next page. You might think: "huh? My page size was 2?" Well, that's because Filter works after pagination and your page originally had two entities, of which one got filtered out. So you are left with a page of 1 value and a pointer to the next page.
{
  "after": [
    ...
  ],
  "data": [
    [
      1,
      7,
      3,
      7,
      10,
      6,
      "mullion",
      "team-building"
    ]
  ]
}
You could also opt to Filter directly on the SetRef and only paginate afterwards. In that case the pages will contain the requested size. However, keep in mind that this is an O(n) operation on the number of elements that comes back from Range: Range uses the index, but from the moment you use Filter, it loops over each of the elements, as sketched below.
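A minimal sketch of that variant (untested), assuming the same all_camping_first index and bounds as above:
Paginate(
    Filter(
        Range(Match(Index('all_camping_first')), [1], [3]),
        Lambda(
            ['camping', 'swimming', 'hiking', 'culture', 'nightlife', 'budget', 'name', 'focus'],
            And(GTE(Var('hiking'), 0), GTE(7, Var('hiking')))
        )
    ),
    { size: 2 }
)
// The page now comes back full (up to 2 matching elements),
// but Filter has walked every element the Range returned.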
OPTION 3: Indexes on one value + Intersections!
This is the best solution for your use-case but it requires a bit more understanding and an intermediate index.
When we look at the doc examples for intersection we see this example:
Paginate(
Intersection(
Match(q.Index('spells_by_element'), 'fire'),
Match(q.Index('spells_by_element'), 'water'),
)
)
This works because it's two times the same index, and that means that the results are similar values (references in this case).
Let's say we add a few indexes.
CreateIndex({
name: 'by_camping',
source: Collection('place'),
values: [
{ field: ['data', 'camping']}, {field: ['ref']}
]
})
CreateIndex({
name: 'by_swimming',
source: Collection('place'),
values: [
{ field: ['data', 'swimming']}, {field: ['ref']}
]
})
CreateIndex({
name: 'by_hiking',
source: Collection('place'),
values: [
{ field: ['data', 'hiking']}, {field: ['ref']}
]
})
We can intersect them now, but it will not give us the right result. For example, let's try this:
Paginate(
Intersection(
Range(Match(Index("by_camping")), [3], []),
Range(Match(Index("by_swimming")), [3], [])
)
)
The result is empty, although we had a place with swimming 3 and camping 5.
That is exactly the problem: if swimming and camping were both the same value, we would get a result. So it's important to notice that Intersection intersects the values, and that includes both the camping/swimming value and the reference. That means we have to drop the value, since we only need the reference. The way to do that before pagination is with a Join. Essentially we are going to join with another index that is going to just... return the ref (not specifying values defaults to only the ref).
CreateIndex({
name: 'ref_by_ref',
source: Collection('place'),
terms: [{field: ['ref']}]
})
This join looks as follows:
Paginate(Join(
Range(Match(Index('by_camping')), [4], [9]),
Lambda(['value', 'ref'], Match(Index('ref_by_ref'), Var('ref'))
)))
Here we took the result of Match(Index('by_camping')) and dropped the value by joining with an index that only returns the ref. Now let's combine this and do an AND kind of range query ;)
Paginate(Intersection(
Join(
Range(Match(Index('by_camping')), [1], [3]),
Lambda(['value', 'ref'], Match(Index('ref_by_ref'), Var('ref'))
)),
Join(
Range(Match(Index('by_hiking')), [0], [7]),
Lambda(['value', 'ref'], Match(Index('ref_by_ref'), Var('ref'))
))
))
The result is two values, and both in the same page!
Note that you can easily extend or compose FQL by just using the host language (in this case JS) to make this look much nicer (note that I didn't test this piece of code):
const DropAllButRef = function(rangeMatch) {
    return Join(
        rangeMatch,
        Lambda(['value', 'ref'], Match(Index('ref_by_ref'), Var('ref')))
    )
}

Paginate(Intersection(
    DropAllButRef(Range(Match(Index('by_camping')), [1], [3])),
    DropAllButRef(Range(Match(Index('by_hiking')), [0], [7]))
))
And a final extension: this still only returns references, so you'd need a Map/Get to fetch the documents. There is of course a way around that if you really want to, by... just using another index :)
const index = CreateIndex({
name: 'all_values_by_ref',
source: Collection('place'),
values: [
{ field: ['data', 'camping'] },
{ field: ['data', 'swimming'] },
{ field: ['data', 'hiking'] },
{ field: ['data', 'culture'] },
{ field: ['data', 'nightlife'] },
{ field: ['data', 'budget'] },
{ field: ['data', 'name'] },
{ field: ['data', 'focus'] }
],
terms: [
{ field: ['ref'] }
]
})
Now the range query will get you everything without a Map/Get:
Paginate(
Intersection(
Join(
Range(Match(Index('by_camping')), [1], [3]),
Lambda(['value', 'ref'], Match(Index('all_values_by_ref'), Var('ref'))
)),
Join(
Range(Match(Index('by_hiking')), [0], [7]),
Lambda(['value', 'ref'], Match(Index('all_values_by_ref'), Var('ref'))
))
)
)
With this join approach you could even do range indexes on different collections as long as you join them to the same reference before intersecting! Pretty cool huh?
Can I store more values in the index?
Yes you can; indexes in FaunaDB are views, so let's call them indiviews. It's a tradeoff: essentially you are exchanging compute for storage. By making a view with many values you get very fast access to a certain subset of your data. But there is another tradeoff, and that is flexibility. You cannot just go adding elements, since that would require rewriting your whole index. Instead you have to make a new index and wait for it to build if you have a lot of data (and yes, that is quite common), and make sure that the queries you do (look at the lambda parameters in Map/Filter) match your new index. You can always delete the other index afterwards. Just using Map/Get is more flexible; everything in databases is a tradeoff, and FaunaDB gives you both options :). I would suggest using such an approach from the moment your data model is fixed and you see a specific part of your app that you want to optimise.
Avoiding MapGet
The second question, on Map/Get, requires some explanation. Separating out the values that you search on from the places (as Ben did) is a great idea if you want to use Join to get the actual places more efficiently. That does not require a Map/Get and will therefore cost you far fewer reads. Do notice that Join is really a traversal (it replaces the current references with the target references it joins to), so if you need both the values and the actual place data in one object at the end of your query, you will still require a Map/Get. Look at it from this perspective: indexes are ridiculously cheap in terms of reads and you can go quite far with them, but for some operations there is just no way around Map/Get, and a Get is still only 1 read. Given that you get 100,000 reads for free per day, that is still not expensive :). You can also keep your pages relatively small (the size parameter in Paginate) to make sure you don't do unnecessary Gets unless your users or app require more pages.
For people reading this that do not know this yet:
1 index page === 1 read
1 get === 1 read
Final notes
We can and will make this easier in the future. However, note that you are working with a scalable distributed database and often these things are just not even possible in other solutions or very inefficient. FaunaDB provides you with very powerful structures and raw access to how indexes work and gives you many options. It does not try to be clever for you behind the scenes as this might result in very inefficient queries in case we get it wrong (that would be a bummer in a scalable pay-as-you-go system).
There are a couple of misconceptions that I think are leading you astray. The most important one: Match(Index($x)) generates a set reference, which is an ordered set of tuples. The tuples correspond to the array of fields that are present in the values section of an index. By default this will just be a one-tuple containing a reference to a document in the collection selected by the index. Range operates on a set reference and knows nothing about the terms used to select the returned set ref. So how do we compose the query?
Starting from first principles: let's imagine we just had this stuff in memory. If we had a set of (attribute, score) pairs ordered by attribute, score, then taking only those where attribute == $attribute would get us close, and filtering further by score > $score would get us what we wanted. This corresponds exactly to a range query over scores with the attribute as a term, assuming we model the attribute/value pairs as documents. We can also embed pointers back to the place so we can retrieve that as well in the same query. Enough chatter, let's do it:
First stop: our collections.
jnr> CreateCollection({name: "place_attribute"})
{
ref: Collection("place_attribute"),
ts: 1588528443250000,
history_days: 30,
name: 'place_attribute'
}
jnr> CreateCollection({name: "place"})
{
ref: Collection("place"),
ts: 1588528453350000,
history_days: 30,
name: 'place'
}
Next up, some data. We'll choose a couple of places and give them some attributes.
jnr> Create(Collection("place"), {data: {"name": "mullion"}})
jnr> Create(Collection("place"), {data: {"name": "church cove"}})
jnr> Create(Collection("place_attribute"), {data: {"attribute": "swimming", "score": 3, "place": Ref(Collection("place"), 264525084639625739)}})
jnr> Create(Collection("place_attribute"), {data: {"attribute": "hiking", "score": 1, "place": Ref(Collection("place"), 264525084639625739)}})
jnr> Create(Collection("place_attribute"), {data: {"attribute": "hiking", "score": 7, "place": Ref(Collection("place"), 264525091487875586)}})
Now for the more interesting part. The index.
jnr> CreateIndex({name: "attr_score", source: Collection("place_attribute"), terms:[{"field":["data", "attribute"]}], values:[{"field": ["data", "score"]}, {"field": ["data", "place"]}]})
{
ref: Index("attr_score"),
ts: 1588529816460000,
active: true,
serialized: true,
name: 'attr_score',
source: Collection("place_attribute"),
terms: [ { field: [ 'data', 'attribute' ] } ],
values: [ { field: [ 'data', 'score' ] }, { field: [ 'data', 'place' ] } ],
partitions: 1
}
Ok. A simple query. Who has Hiking?
jnr> Paginate(Match(Index("attr_score"), "hiking"))
{
data: [
[ 1, Ref(Collection("place"), "264525084639625730") ],
[ 7, Ref(Collection("place"), "264525091487875600") ]
]
}
Without too much imagination one could sneak a Get call into that to pull the place out.
What about only hiking with a score over 5? We have an ordered set of tuples, so just supplying the first component (the score) is enough to get us what we want.
jnr> Paginate(Range(Match(Index("attr_score"), "hiking"), [5], null))
{ data: [ [ 7, Ref(Collection("place"), "264525091487875600") ] ] }
What about a compound condition? Hiking under 5 and swimming (any score). This is where things take a bit of a turn. We want to model conjunction, which in fauna means intersecting sets. The problem we have is that up until now we have been using an index that returns the score as well as the place ref. For intersection to work we need just the refs. Time for a sleight of hand:
jnr> Get(Index("doc_by_doc"))
{
ref: Index("doc_by_doc"),
ts: 1588530936380000,
active: true,
serialized: true,
name: 'doc_by_doc',
source: Collection("place"),
terms: [ { field: [ 'ref' ] } ],
partitions: 1
}
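(The creation of doc_by_doc isn't shown; judging from the definition printed above, it would presumably have been something like:)
jnr> CreateIndex({name: "doc_by_doc", source: Collection("place"), terms: [{"field": ["ref"]}]})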
What's the point of such an index, you ask? Well, we can use it to drop any data we like from any index and be left with just the refs, via Join. This gives us the place refs with a hiking score less than 5 (the empty array sorts before anything, so it works as a placeholder for a lower bound).
jnr> Paginate(Join(Range(Match(Index("attr_score"), "hiking"), [], [5]), Lambda(["s", "p"], Match(Index("doc_by_doc"), Var("p")))))
{ data: [ Ref(Collection("place"), "264525084639625739") ] }
So finally, the pièce de résistance: all places with swimming and (hiking < 5):
jnr> Let({
... hiking: Join(Range(Match(Index("attr_score"), "hiking"), [], [5]), Lambda(["s", "p"], Match(Index("doc_by_doc"), Var("p")))),
... swimming: Join(Match(Index("attr_score"), "swimming"), Lambda(["s", "p"], Match(Index("doc_by_doc"), Var("p"))))
... },
... Map(Paginate(Intersection(Var("hiking"), Var("swimming"))), Lambda("ref", Get(Var("ref"))))
... )
{
data: [
{
ref: Ref(Collection("place"), "264525084639625739"),
ts: 1588529629270000,
data: { name: 'mullion' }
}
]
}
Tada. This could be neatened up a lot with a couple of UDFs; exercise left to the reader. Conditions involving OR can be managed with Union in much the same way, as sketched below.
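For illustration, a sketch of the OR case using the same indexes, with Union swapped in for Intersection: all places with swimming or (hiking < 5):
jnr> Let({
...   hiking: Join(Range(Match(Index("attr_score"), "hiking"), [], [5]), Lambda(["s", "p"], Match(Index("doc_by_doc"), Var("p")))),
...   swimming: Join(Match(Index("attr_score"), "swimming"), Lambda(["s", "p"], Match(Index("doc_by_doc"), Var("p"))))
... },
...   Map(Paginate(Union(Var("hiking"), Var("swimming"))), Lambda("ref", Get(Var("ref"))))
... )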
An easy way to query with multiple conditions, I think, is to query with the difference between document sets. In my solution it looks like this:
const response = await client.query(
q.Let(
{
activeUsers: q.Difference(
q.Match(q.Index("allUsers")),
q.Match(q.Index("usersByStatus"), "ARCHIVE")
),
paginatedDocuments: q.Map(
q.Paginate(q.Var("activeUsers"), {
size,
before: reqBefore,
after: reqAfter
}),
q.Lambda("x", q.Get(q.Var("x")))
),
total: q.Count(q.Var("activeUsers"))
},
{
documents: q.Var("paginatedDocuments"),
total: q.Var("total")
}
)
);
const {
documents: {
data: dbData = [],
before: dbBefore = [],
after: dbAfter = []
} = {},
total = 0
} = response || {};
const respBefore = dbBefore[0]?.value?.id || null;
const respAfter = dbAfter[0]?.value?.id || null;
const data = dbData.map((userData) => {
  const {
    ref: { id = null } = {},
    data: { firstName = "", lastName = "" } = {}
  } = userData;
return {
id,
firstName,
lastName
};
});
So in the query builder you can filter each nested document variable in the Let section by whatever index you want.
Here is another variant of filtering, which in SQL would look like:
SELECT * FROM clients WHERE salary > 2000 AND age > 30;
For fauna query:
const response = await client.query(
q.Let(
{
allClients: q.Match(q.Index("allClients")),
filteredClients: q.Filter(
q.Var("allClients"),
q.Lambda(
"client",
q.And(
q.GT(q.Select(["data", "salary"], q.Get(q.Var("client"))), 2000),
q.GT(q.Select(["data", "age"], q.Get(q.Var("client"))), 30)
)
)
),
paginatedDocuments: q.Map(
q.Paginate(q.Var("filteredClients")),
q.Lambda("x", q.Get(q.Var("x")))
),
total: q.Count(q.Var("filteredClients"))
},
{
documents: q.Var("paginatedDocuments"),
total: q.Var("total")
}
)
);
This works like filtering in JavaScript, where a document ends up in the result if the condition returns true. Example:
const filteredClients = allClients.filter((client) => {
const { salary, age } = client;
return ( salary > 2000 ) && (age > 30)
})
{
"movies": {
"movie1": {
"genre": "comedy",
"name": "As good as it gets",
"lead": "Jack Nicholson"
},
"movie2": {
"genre": "Horror",
"name": "The Shining",
"lead": "Jack Nicholson"
},
"movie3": {
"genre": "comedy",
"name": "The Mask",
"lead": "Jim Carrey"
}
}
}
I am a Firebase newbie. How can I retrieve a result from the data above where genre = 'comedy' AND lead = 'Jack Nicholson'?
What options do I have?
Using Firebase's Query API, you might be tempted to try this:
// !!! THIS WILL NOT WORK !!!
ref
.orderBy('genre')
.startAt('comedy').endAt('comedy')
.orderBy('lead') // !!! THIS LINE WILL RAISE AN ERROR !!!
.startAt('Jack Nicholson').endAt('Jack Nicholson')
.on('value', function(snapshot) {
console.log(snapshot.val());
});
But as @RobDiMarco from Firebase says in the comments:
multiple orderBy() calls will throw an error
So my code above will not work.
I know of three approaches that will work.
1. filter most on the server, do the rest on the client
What you can do is execute one orderBy().startAt()/endAt() query on the server, pull down the remaining data, and filter that in JavaScript code on your client.
ref
.orderBy('genre')
.equalTo('comedy')
.on('child_added', function(snapshot) {
var movie = snapshot.val();
if (movie.lead == 'Jack Nicholson') {
console.log(movie);
}
});
2. add a property that combines the values that you want to filter on
If that isn't good enough, you should consider modifying/expanding your data to allow your use-case. For example: you could stuff genre+lead into a single property that you just use for this filter.
"movie1": {
"genre": "comedy",
"name": "As good as it gets",
"lead": "Jack Nicholson",
"genre_lead": "comedy_Jack Nicholson"
}, //...
You're essentially building your own multi-column index that way and can query it with:
ref
.orderBy('genre_lead')
.equalTo('comedy_Jack Nicholson')
.on('child_added', function(snapshot) {
var movie = snapshot.val();
console.log(movie);
});
David East has written a library called QueryBase that helps with generating such properties.
You could even do relative/range queries, let's say that you want to allow querying movies by category and year. You'd use this data structure:
"movie1": {
"genre": "comedy",
"name": "As good as it gets",
"lead": "Jack Nicholson",
"genre_year": "comedy_1997"
}, //...
And then query for comedies of the 90s with:
ref
.orderBy('genre_year')
.startAt('comedy_1990')
.endAt('comedy_2000')
.on('child_added', function(snapshot) {
var movie = snapshot.val();
console.log(movie);
});
If you need to filter on more than just the year, make sure to add the other date parts in order of decreasing significance and zero-padded, e.g. "comedy_1997-12-25". This way the lexicographical ordering that Firebase does on string values will match the chronological ordering.
This combining of values in a property can work with more than two values, but you can only do a range filter on the last value in the composite property.
A very special variant of this is implemented by the GeoFire library for Firebase. This library combines the latitude and longitude of a location into a so-called Geohash, which can then be used to do realtime range queries on Firebase.
3. create a custom index programmatically
Yet another alternative is to do what we've all done before this new Query API was added: create an index in a different node:
"movies"
// the same structure you have today
"by_genre"
"comedy"
"by_lead"
"Jack Nicholson"
"movie1"
"Jim Carrey"
"movie3"
"Horror"
"by_lead"
"Jack Nicholson"
"movie2"
There are probably more approaches. For example, this answer highlights an alternative tree-shaped custom index: https://stackoverflow.com/a/34105063
If none of these options work for you, but you still want to store your data in Firebase, you can also consider using its Cloud Firestore database.
Cloud Firestore can handle multiple equality filters in a single query, but only one range filter. Under the hood it essentially uses the same query model, but it's like it auto-generates the composite properties for you. See Firestore's documentation on compound queries.
I've written a personal library that allows you to order by multiple values, with all the ordering done on the server.
Meet Querybase!
Querybase takes in a Firebase Database Reference and an array of fields you wish to index on. When you create new records it will automatically handle the generation of keys that allow for multiple querying. The caveat is that it only supports straight equivalence (no less than or greater than).
const databaseRef = firebase.database().ref().child('people');
const querybaseRef = querybase.ref(databaseRef, ['name', 'age', 'location']);
// Automatically handles composite keys
querybaseRef.push({
name: 'David',
age: 27,
location: 'SF'
});
// Find records by multiple fields
// returns a Firebase Database ref
const queriedDbRef = querybaseRef
.where({
name: 'David',
age: 27
});
// Listen for realtime updates
queriedDbRef.on('value', snap => console.log(snap));
Firebase ref = new Firebase("https://your.firebaseio.com/");
Query query = ref.orderByChild("genre").equalTo("comedy");
query.addValueEventListener(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot dataSnapshot) {
        for (DataSnapshot movieSnapshot : dataSnapshot.getChildren()) {
            Movie movie = movieSnapshot.getValue(Movie.class);
            if (movie.getLead().equals("Jack Nicholson")) {
                System.out.println(movieSnapshot.getKey());
            }
        }
    }

    @Override
    public void onCancelled(FirebaseError firebaseError) {
    }
});
Frank's answer is good, but Firestore recently introduced array-contains, which makes it easier to do AND queries.
You can create a filters field to hold your filter values, and you can add as many as you need. For example, to filter by comedy and Jack Nicholson you add the value comedy_Jack Nicholson, and if you also want comedy and 2014 you can add the value comedy_2014 without creating more fields.
{
"movies": {
"movie1": {
"genre": "comedy",
"name": "As good as it gets",
"lead": "Jack Nicholson",
"year": 2014,
"filters": [
"comedy_Jack Nicholson",
"comedy_2014"
]
}
}
}
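Querying that shape then takes a single array-contains clause. A minimal sketch with the Firestore web SDK, assuming a movies collection (names are hypothetical):
// matches every movie whose filters array contains the composite value
db.collection('movies')
  .where('filters', 'array-contains', 'comedy_Jack Nicholson')
  .get()
  .then(function(querySnapshot) {
    querySnapshot.forEach(function(doc) {
      console.log(doc.id, '=>', doc.data());
    });
  });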
For Cloud Firestore
https://firebase.google.com/docs/firestore/query-data/queries#compound_queries
Compound queries
You can chain multiple equality (== or array-contains) clauses to create more specific queries (logical AND). However, you must create a composite index to combine equality operators with the inequality operators <, <=, >, and !=.
citiesRef.where('state', '==', 'CO').where('name', '==', 'Denver');
citiesRef.where('state', '==', 'CA').where('population', '<', 1000000);
You can perform range (<, <=, >, >=) or not-equals (!=) comparisons only on a single field, and you can include at most one array-contains or array-contains-any clause in a compound query.
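For example, a hedged sketch (citiesRef as in the docs, density being a hypothetical field): the first query is fine because both range filters target the same field; the second is rejected because they target different fields:
// OK: both range filters are on the same field
citiesRef.where('population', '>', 100000).where('population', '<', 1000000);
// NOT OK: range filters on two different fields in one query
citiesRef.where('population', '>', 100000).where('density', '<', 500);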
Firebase Realtime Database doesn't allow querying with multiple conditions.
However, I did find a way around this:
We need to download the initially filtered data from the database and store it in an array list.
Query query = databaseReference.orderByChild("genre").equalTo("comedy");
query.addValueEventListener(new ValueEventListener() {
    @Override
    public void onDataChange(@NonNull DataSnapshot dataSnapshot) {
        ArrayList<Movie> movies = new ArrayList<>();
        for (DataSnapshot dataSnapshot1 : dataSnapshot.getChildren()) {
            String lead = dataSnapshot1.child("lead").getValue(String.class);
            String genre = dataSnapshot1.child("genre").getValue(String.class);
            Movie movie = new Movie(lead, genre);
            movies.add(movie);
        }
        filterResults(movies, "Jack Nicholson");
    }

    @Override
    public void onCancelled(@NonNull DatabaseError databaseError) {
    }
});
Once we obtain the initially filtered data from the database, we do the further filtering in our code:
public void filterResults(final List<Movie> list, final String lead) {
    List<Movie> movies = list.stream()
            .filter(movie -> movie.getLead().equals(lead))
            .collect(Collectors.toList());
    movies.forEach(movie -> System.out.println(movie));
}
The data from the Firebase Realtime Database comes back as an _InternalLinkedHashMap<dynamic, dynamic>.
You can simply convert it to your own map and query it very easily.
For example, I have a chat app and I use the Realtime Database to store the uid of each user and a bool value for whether the user is online or not, under an "Online" node.
Now, I have a class RealtimeDatabase and a static method getOnlineUsersUID().
static getOnlineUsersUID() {
  var dbRef = FirebaseDatabase.instance;
  DatabaseReference reference = dbRef.reference().child("Online");
  reference.once().then((snapshot) {
    // Convert the raw map into a typed map.
    Map<String, bool> map = Map<String, bool>.from(snapshot.value);
    List users = [];
    map.forEach((key, value) {
      // Keep only the uids whose flag is true (online).
      if (value) {
        users.add(key);
      }
    });
    print(users);
  });
}
It will print [NOraDTGaQSZbIEszidCujw1AEym2]
I am new to Flutter; if you know more, please update the answer.
ref.orderByChild("lead").startAt("Jack Nicholson").endAt("Jack Nicholson").listner....
This will work.