I'm building a sample app to evaluate FaunaDB and Next.js.
My plan is to have the web app authenticate separately, then create the user in FaunaDB.
Afterwards, create a token in FaunaDB and let the user connect with their own secret token.
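Roughly, the provisioning I have in mind looks like this (run with a privileged server key; the externalId field and its value are just placeholders for my own setup):
Create(Collection("user"), {
  data: { externalId: "<id from the external auth provider>", isEnabled: true }
})
Create(Tokens(), {
  instance: Ref(Collection("user"), "278571699875611143")
})
The client then connects using the secret returned by Create(Tokens(), ...).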
I believe I'm on the right track with this model, but I'm facing an issue with a custom role in FaunaDB.
The data model is User -> Board -> Task, and I'll use access to boards for this question.
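A board document ends up looking roughly like this, with the owner stored as a Ref to the user document (the name field is just an example):
Create(Collection("board"), {
  data: {
    name: "My first board",
    owner: Ref(Collection("user"), "278571699875611143")
  }
})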
Here is the code for the custom role
{
ref: Role("Free_Tier_Role"),
ts: 1601934616790000,
name: "Free_Tier_Role",
membership: [
{
resource: Collection("user"),
predicate: Query(
Lambda("ref", Select(["data", "isEnabled"], Get(Var("ref"))))
)
}
],
privileges: [
{
resource: Collection("user"),
actions: {
read: true,
write: false,
create: false,
delete: false,
history_read: false,
history_write: false,
unrestricted_read: false
}
},
{
resource: Collection("board"),
actions: {
read: Query(
Lambda(
"ref",
Equals(Identity(), Select(["data", "owner"], Get(Var("ref"))))
)
),
write: Query(
Lambda(
["oldData", "newData"],
And(
Equals(
Select("id", Identity()),
Select(["data", "owner"], Var("oldData"))
),
Equals(
Select(["data", "owner"], Var("oldData")),
Select(["data", "owner"], Var("newData"))
)
)
)
),
create: Query(
Lambda(
"newData",
And(
Equals(Identity(), Select(["data", "owner"], Var("newData"))),
LT(Count(Match(Index("board_by_owner"), Identity())), 3)
)
)
),
delete: Query(
Lambda(
"ref",
Equals(Identity(), Select(["data", "owner"], Get(Var("ref"))))
)
),
history_read: false,
history_write: false,
unrestricted_read: false
}
},
{
resource: Collection("task"),
actions: {
read: Query(
Lambda(
"ref",
Equals(Identity(), Select(["data", "owner"], Get(Var("ref"))))
)
),
write: Query(
Lambda(
["oldData", "newData"],
And(
Equals(
Select("id", Identity()),
Select(["data", "owner"], Var("oldData"))
),
Equals(
Select(["data", "owner"], Var("oldData")),
Select(["data", "owner"], Var("newData"))
)
)
)
),
create: Query(
Lambda(
"newData",
And(
Equals(Identity(), Select(["data", "owner"], Var("newData"))),
LT(Count(Match(Index("task_by_owner"), Identity())), 10)
)
)
),
delete: Query(
Lambda(
"ref",
Equals(Identity(), Select(["data", "owner"], Get(Var("ref"))))
)
),
history_read: false,
history_write: false,
unrestricted_read: false
}
},
{
resource: Index("task_by_owner"),
actions: {
unrestricted_read: false,
read: false
}
},
{
resource: Index("board_by_owner"),
actions: {
unrestricted_read: false,
read: false
}
}
]
}
The problem I'm facing: when I log in with a user token, and that user is the owner of a board, I get an empty list:
> Map(Paginate(Documents(Collection('board'))),Lambda('x', Get(Var('x'))))
{ data: [] }
To verify that the two values match, I ran this command in the dashboard shell:
Select(["data", "owner"], Get(Ref(Collection("board"), "278575744915866117")))
Ref(Collection("user"), "278571699875611143")
And ran Identity() in my token-authenticated session:
> Identity()
Ref(Collection("user"), "278571699875611143")
P.S. Before this approach, I was matching on the id number alone using Select(['data', 'ownerId'], Ref), but that didn't work either, even when I converted both sides with ToString or ToNumber.
Wow, it took me about two days to diagnose what was going on.
Basically, the code above does work: it only allows the owner of a board or task to see and update their own items.
The problem was with membership; it was not being applied as I expected.
Here are the steps to follow if you find yourself with non-working permissions.
Make sure the role actually applies to your user.
Here's how I did it (a rough FQL sketch of these steps follows):
Create a sample function 'test_newFunc' using the default code Query(Lambda("x", Add(Var("x"), Var("x"))))
Create a new role whose membership is the whole 'user' collection, and grant it permission to call 'test_newFunc'
Create a token for your user and start the shell using the token's secret
Run Call("test_newFunc", 2); it succeeds only if the role applies to your user
For me, I added the membership predicate to be
Lambda("docRef", Equals(true, Select(["data", "isEnabled"], Get(Var("docRef")))))
Which means the user must have a data field "isEnabled" whose value is true.
Then test by toggling this field on the impersonated user until you confirm that this role is the one being applied.
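For example, to flip the flag on the user you're impersonating (using the document id from my earlier tests):
Update(Ref(Collection("user"), "278571699875611143"), {
  data: { isEnabled: false }
})
With isEnabled set to false, the token-authenticated shell loses the role entirely, which confirms that this role is the one granting access.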
Once this step is clear, you can test the predicates for each privilege.
Hope this is helpful for future developers who run into this issue
Related
The following query:
Map(
Paginate(Documents(Collection("backyard"))),
Lambda(
"f",
Let(
{
backyard: Get(Var("f")),
user: Get(Select(["data", "user"], Var("backyard")))
},
{
backyard: Var("backyard"),
user: Var("user")
}
)
)
)
results in:
{
data: [
{
backyard: {
ref: Ref(Collection("backyard"), "333719283470172352"),
ts: 1654518359560000,
data: {
user: Ref(Collection("user"), "333718599460978887"),
product: "15358",
date: "2022-06-06",
counter: "1"
}
},
user: {
ref: Ref(Collection("user"), "333718599460978887"),
ts: 1654517707220000,
data: {
email: "<email>",
name: "Paolo"
}
}
},
{
backyard: {
ref: Ref(Collection("backyard"), "333747850716381384"),
ts: 1654545603400000,
data: {
user: Ref(Collection("user"), "333718599460978887"),
product: "15358",
date: "2022-06-08",
counter: "4"
}
},
user: {
ref: Ref(Collection("user"), "333718599460978887"),
ts: 1654517707220000,
data: {
email: "<email>",
name: "Paolo"
}
}
}
]
}
How can I filter backyard by date without losing the nested users?
I tried:
Map(
Paginate(Range(Match(Index("backyard_by_date")), "2022-05-08", "2022-06-08")),
Lambda(
"f",
Let(
{
backyard: Get(Var("f")),
user: Get(Select(["data", "user"], Var("backyard")))
},
{
backyard: Var("backyard"),
user: Var("user")
}
)
)
)
However, the result set is an empty array, and even the following on its own returns an empty array:
Paginate(Range(Match(Index("backyard_by_date")), "2022-05-08", "2022-06-08"))
My index:
{
name: "backyard_by_date",
unique: false,
serialized: true,
source: "backyard"
}
Maybe I have to adjust my index? The following helped me a lot:
How to get nested documents in FaunaDB?
How to Get Data from two collection in faunadb
how to join collections in faunadb?
Your index definition is missing details. Once that gets fixed, everything else you were doing is exactly right.
In your provided index, there are no terms or values specified, which makes the backyard_by_date index a "collection" index: it only records the references of every document in the collection. In this way, it is functionally equivalent to using the Documents function but incurs additional write operations as documents are created or updated within the backyard collection.
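You can see this in the shell: with no values defined, each index entry is just a reference, so there is nothing for Range to compare your date strings against. The unmodified index would return something like:
> Paginate(Match(Index("backyard_by_date")))
{
data: [
Ref(Collection("backyard"), "333719283470172352"),
Ref(Collection("backyard"), "333747850716381384")
]
}
which is why a Range with date-string bounds over it comes back empty.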
To make your query work, you should delete your existing index and (after 60 seconds) redefine it like this:
CreateIndex({
name: "backyard_by_date",
source: Collection("backyard"),
values: [
{field: ["data", "date"]},
{field: ["ref"]}
]
})
That definition configures the index to return the date field and the reference for every document.
Let's confirm that the index returns what we expect:
> Paginate(Match(Index("backyard_by_date")))
{
data: [
[ '2022-06-06', Ref(Collection("backyard"), "333719283470172352") ],
[ '2022-06-08', Ref(Collection("backyard"), "333747850716381384") ]
]
}
Placing the date field's value first means that we can use it effectively in Range:
> Paginate(Range(Match(Index("backyard_by_date")), "2022-05-08", "2022-06-08"))
{
data: [
[ '2022-06-06', Ref(Collection("backyard"), "333719283470172352") ],
[ '2022-06-08', Ref(Collection("backyard"), "333747850716381384") ]
]
}
And to verify that Range is working as expected:
> Paginate(Range(Match(Index("backyard_by_date")), "2022-06-07", "2022-06-08"))
{
data: [
[ '2022-06-08', Ref(Collection("backyard"), "333747850716381384") ]
]
}
Now that we know the index is working correctly, your filter query needs a few adjustments:
> Map(
Paginate(
Range(Match(Index("backyard_by_date")), "2022-05-08", "2022-06-08")
),
Lambda(
["date", "ref"],
Let(
{
backyard: Get(Var("ref")),
user: Get(Select(["data", "user"], Var("backyard")))
},
{
backyard: Var("backyard"),
user: Var("user")
}
)
)
)
{
data: [
{
backyard: {
ref: Ref(Collection("backyard"), "333719283470172352"),
ts: 1657918078190000,
data: {
user: Ref(Collection("user"), "333718599460978887"),
product: '15358',
date: '2022-06-06',
counter: '1'
}
},
user: {
ref: Ref(Collection("user"), "333718599460978887"),
ts: 1657918123870000,
data: { name: 'Paolo', email: '<email>' }
}
},
{
backyard: {
ref: Ref(Collection("backyard"), "333747850716381384"),
ts: 1657918172850000,
data: {
user: Ref(Collection("user"), "333718599460978887"),
product: '15358',
date: '2022-06-08',
counter: '4'
}
},
user: {
ref: Ref(Collection("user"), "333718599460978887"),
ts: 1657918123870000,
data: { name: 'Paolo', email: '<email>' }
}
}
]
}
Since the index returns a date string and a reference, the Lambda inside the Map has to accept those values as arguments. Aside from renaming f to ref, the rest of your query is unchanged.
I have a column in a table that is a JSON string. Some of these strings have the following format:
{
...
"rules": {
"rule_1": {
"results": [],
"isTestMode": true
},
"rule_2": {
"results": [],
"isTestMode": true
},
"rule_3": {
"results": [
{
"required": true,
"amount": 99.31
}
],
"isTestMode": true
},
"rule_4": {
"results": [],
"isTestMode": false
},
...
}
...
}
Within this nested "rules" object, I want to return true if results[0]["required"] = true AND "isTestMode" = false for any of the rules. The catch is that "rule_1", "rule_2", ... "rule_x" can have arbitrary names that aren't known in advance.
Is it possible to write a query that iterates over all keys in "rules" and checks whether any of them matches this condition? Is there any other way to achieve this?
If the keys were known in advance then I could do something like this:
WHERE
(JSON_ARRAY_LENGTH(JSON_EXTRACT(json, '$.rules.rule_1.results')) = 1
AND JSON_EXTRACT_SCALAR(json, '$.rules.rule_1.results[0].required') = 'true'
AND JSON_EXTRACT_SCALAR(json, '$.rules.rule_1.isTestMode') = 'false')
OR (JSON_ARRAY_LENGTH(JSON_EXTRACT(json, '$.rules.rule_2.results')) = 1
AND JSON_EXTRACT_SCALAR(json, '$.rules.rule_2.results[0].required') = 'true'
AND JSON_EXTRACT_SCALAR(json, '$.rules.rule_2.isTestMode') = 'false')
OR ...
You can extract the rules property, cast it to MAP(varchar, json), and process its values:
WITH dataset AS (
SELECT * FROM (VALUES
(JSON '{
"rules": {
"rule_1": {
"results": [],
"isTestMode": true
},
"rule_2": {
"results": [],
"isTestMode": true
},
"rule_3": {
"results": [
{
"required": true,
"amount": 99.31
}
],
"isTestMode": true
},
"rule_4": {
"results": [],
"isTestMode": false
}
}
}')
) AS t (json_value))
select cardinality(
filter(
map_values(cast(json_extract(json_value, '$.rules') as MAP(varchar, json))), -- transform into a MAP and take its values
js -> NOT cast(json_extract(js, '$.isTestMode') as BOOLEAN) -- isTestMode must be false
AND cast(json_extract(js, '$.results[0].required') as BOOLEAN) -- required of the first results element must be true
)) > 0
from dataset
For the sample data this returns false, since no rule has required = true together with isTestMode = false; it flips to true as soon as any rule satisfies both conditions.
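If you need this as a row filter rather than a projected boolean, the same expression can go directly into a WHERE clause. A sketch, assuming your table is called my_table and the JSON column is named json as in your example:
SELECT *
FROM my_table
WHERE cardinality(
  filter(
    map_values(cast(json_extract(json, '$.rules') as MAP(varchar, json))),
    js -> NOT cast(json_extract(js, '$.isTestMode') as BOOLEAN)
      AND cast(json_extract(js, '$.results[0].required') as BOOLEAN)
  )
) > 0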
I was able to solve this with a regex. It's not ideal, and I would still like to know whether this can be done using the built-in JSON functions.
WHERE REGEXP_LIKE(json, '.*{"results":\[{"required":true,"amount":\d+.\d+"}],"isTestMode":false}.*')
I have a very simple collection with documents that look like this:
{
...
latestEdit: Time(...),
lastPublished: Time(...)
}
I would like to query all documents whose latestEdit time is after their lastPublished time.
I find FQL very different from SQL, and I'm finding the transition quite hard.
Any help much appreciated.
Fauna's FQL is not declarative, so you have to construct the appropriate indexes and queries to help you solve problems like this.
Fauna indexes have a feature called "bindings", which allow you to provide a user-defined function that can compute a value based on document values. The binding lets us index the computed value by itself (rather than having to index on latestEdit or lastPublished). Here's what that might look like:
CreateIndex({
name: "edit_after_published",
source: {
collection: Collection("test"),
fields: {
needsPublish: Query(
Lambda(
"doc",
Let(
{
latestEdit: Select(["data", "latestEdit"], Var("doc")),
lastPublished: Select(["data", "lastPublished"], Var("doc")),
},
If(
GT(Var("latestEdit"), Var("lastPublished")),
true,
false
)
)
)
)
}
},
terms: [ { binding: "needsPublish" } ]
})
You can see that we define a binding called needsPublish. The binding uses Let to define named values for the two document fields that we want to compare, and then the If expression checks whether the latestEdit value is greater than the lastPublished value: when it is, we return true; otherwise we return false. The binding is then used in the index's terms definition, which specifies the fields that we want to be able to search on.
I created sample documents in a collection called test, like so:
> Create(Collection("test"), { data: { name: "first", latestEdit: Now(), lastPublished: TimeSubtract(Now(), 1, "day") }})
{
ref: Ref(Collection("test"), "306026106743423488"),
ts: 1628108088190000,
data: {
name: 'first',
latestEdit: Time("2021-08-04T20:14:48.121Z"),
lastPublished: Time("2021-08-03T20:14:48.121Z")
}
}
> Create(Collection("test"), { data: { name: "second", lastPublished: Now(), latestEdit: TimeSubtract(Now(), 1, "day") }})
{
ref: Ref(Collection("test"), "306026150784664064"),
ts: 1628108130150000,
data: {
name: 'second',
lastPublished: Time("2021-08-04T20:15:30.148Z"),
latestEdit: Time("2021-08-03T20:15:30.148Z")
}
}
The first document subtracts one day from lastPublished and the second document subtracts one day from latestEdit, to test both conditions of the binding.
Then we can query for all documents where needsPublish results in true:
> Map(Paginate(Match(Index("edit_after_published"), true)), Lambda("X", Get(Var("X"))))
{
data: [
{
ref: Ref(Collection("test"), "306026106743423488"),
ts: 1628108088190000,
data: {
name: 'first',
latestEdit: Time("2021-08-04T20:14:48.121Z"),
lastPublished: Time("2021-08-03T20:14:48.121Z")
}
}
]
}
And we can also query for all documents where needsPublish is false:
> Map(Paginate(Match(Index("edit_after_published"), false)), Lambda("X", Get(Var("X"))))
{
data: [
{
ref: Ref(Collection("test"), "306026150784664064"),
ts: 1628108130150000,
data: {
name: 'second',
lastPublished: Time("2021-08-04T20:15:30.148Z"),
latestEdit: Time("2021-08-03T20:15:30.148Z")
}
}
]
}
I'm having a problem with FaunaDB. I'm new to it and went through the docs, but no luck. I'll jump into showing you the code right now, but SO complains that my question has too much code in it.
I'm trying to do the following SQL statement with FQL:
SELECT * FROM order
WHERE cid = '1234'
AND fulfilled = false
AND rank >= 0
AND rank <= 100
I've tried the following:
q.Paginate(
q.Intersection(
q.Match(
q.Index('order_by_cid'),
'1234',
),
q.Match(
q.Index('order_by_status'),
false,
),
q.Range(
q.Match('order_by_rank'),
[0],
[100]
),
)
)
But this returns { data: [] }
My indexes:
{
ref: Index("order_by_cid"),
ts: 1602095185756000,
active: true,
serialized: true,
name: "order_by_cid",
source: Collection("order"),
terms: [{ field: ["data", "cid"] }],
partitions: 1
}
{
ref: Index("order_by_status"),
ts: 1602163027885000,
active: true,
serialized: true,
name: "order_by_status",
source: Collection("order"),
terms: [{ field: ["data", "fulfilled"] }],
partitions: 1
}
{
ref: Index("order_by_rank"),
ts: 1602611790710000,
active: true,
serialized: true,
name: "order_by_rank",
source: Collection("order"),
values: [{ field: ["data", "rank"] }, { field: "ref" }],
partitions: 8
}
Rather than intersecting three indexes, combine the equality fields into a single index's terms, with rank and the ref as values so that Range can filter on rank. The index should be:
CreateIndex(
{
name:'refByCidFulfilled',
source:Collection("order"),
terms:[{field:['data','cid']},{field:['data','fulfilled']}],
values:[{field:['data','rank']},{field:['ref']}]
}
)
And you can query:
Map(
Paginate(
Range(Match(Index('refByCidFulfilled'), ['1234', false]), [0], [100])
),
Lambda(['rank', 'ref'], Get(Var('ref')))
)
I'm using DataTables with server-side processing (Django).
I have a separate text field that I use to custom-filter the data in the DataTable after the table has already been rendered.
The following works just fine (I want to custom-filter columns):
var table = $('#problem_history').DataTable( {
"bJQueryUI": true,
"aaSorting": [[ 1, "desc" ]],
"aoColumns": [
// various columns here
],
"processing": true,
"serverSide": true,
"ajax": {
"url": "/getdata",
"data": {
"friend_name": 'Robert'
}
}
} );
So on the page load (initial load of the DataTable) it filters for 'Robert' just fine. But now I want to programmatically change the data to filter for "friend_name" == "Sara"
I already tried the following; filteredData holds a correctly filtered set, but the table itself does not redraw with the new filter.
var filteredData = table.column( 4 ).data().filter(
function ( value, index ) {
return value == 'Sara' ? true : false;
}
);
table.draw();
I also tried this but no luck:
filteredData.draw();
How can I achieve this?
Thank you for your help.
Here is a very nice explanation on how to do it:
https://datatables.net/reference/option/ajax.data
I am currently using this code:
"ajax": {"url":"/someURL/Backend",
"data": function ( d ) {
return $.extend( {}, d, {
"parameterName": $('#fieldIDName').val(),
"parameterName2": $('#fieldIDName2').val()
} );
}
}
You call it by doing the following:
$('#myselectid').change(function (e) {
table.draw();
});
If you want to submit by clicking a button, change .change to .click and make sure the selector points to the button's id in the HTML.
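Adapted to the friend_name example from the question, a minimal sketch (assuming the text field has id #friend_name) would be:
var table = $('#problem_history').DataTable({
    "processing": true,
    "serverSide": true,
    "ajax": {
        "url": "/getdata",
        "data": function (d) {
            // re-read the input on every draw so the latest value is sent
            d.friend_name = $('#friend_name').val();
        }
    }
});
$('#friend_name').on('change', function () {
    table.draw();   // triggers a new server-side request with the current value
});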
You've almost got it. You just need to pass the filter value through the data parameter of the DataTables request, using a function so the value is re-read on every request:
"ajax": {
    "url": "/getdata",
    "data": function (d) {
        d.friend_name = $('#myselectid').val();
    }
}
And to filter the data, just call draw() on the select's change event:
$('#myselectid').change(function (e) {
    table.draw();
});
For basic searches, use the search() API (and call draw() so the table reloads):
// Invoke a basic search for 'a'
dt.search('a', false).draw()
For more complex queries, you can use the SearchBuilder request format on the back end by injecting criteria into the ajax request with the preXhr.dt event. Here are some SearchBuilder examples:
// Basic example:
// (searchable fields contain 'a'
//  AND (office = Tokyo AND Salary > 100000)
// )
$('#problem_history').on('preXhr.dt', function(e, settings, data){
data['searchBuilder'] = {
'criteria': [
{'data': 'Office', 'origData': 'office', 'type': 'string'
,'condition': '='
,'value': ["Tokyo"], 'value1': "Tokyo"
}
,{'data': 'Salary', 'origData': 'salary', 'type': 'num'
,'condition': '>'
,'value': [100000], 'value1': 100000
}
]
,'logic': 'AND'
}
})
// Complex example:
// (searchable fields contain 'a'
//  AND (
//    (office = Tokyo AND Salary > 100000)
//    OR (office = London AND Salary > 200000)
//  )
// )
$('#problem_history').on('preXhr.dt', function(e, settings, data){
data['searchBuilder'] = {
'criteria': [
{'criteria': [
{'data': 'Office', 'origData': 'office', 'type': 'string'
,'condition': '='
,'value': ["Tokyo"], 'value1': "Tokyo"
}
,{'data': 'Salary', 'origData': 'salary', 'type': 'num'
,'condition': '>'
,'value': [100000], 'value1': 100000
}
]
,'logic': 'AND'
}
,{'criteria': [
{'data': 'Office', 'origData': 'office', 'type': 'string'
,'condition': '='
,'value': ["London"], 'value1': "London"
}
,{'data': 'Salary', 'origData': 'salary', 'type': 'num'
,'condition': '>'
,'value': [200000], 'value1': 200000
}
]
,'logic': 'AND'
}
]
,'logic': 'OR'
}
})
SearchBuilder condition types:
=
!=
contains
starts
ends
<
<=
>
>=
between
null
!null
SearchBuilder value fields:
value: [<val>] appears to always mirror value1
value2: the upper bound for the 'between' condition, where value1 is the lower bound
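For example, a 'between' criterion on Salary would look something like this (a sketch following the same payload pattern as above):
$('#problem_history').on('preXhr.dt', function(e, settings, data){
    data['searchBuilder'] = {
        'criteria': [
            {'data': 'Salary', 'origData': 'salary', 'type': 'num'
            ,'condition': 'between'
            ,'value': [100000, 200000], 'value1': 100000, 'value2': 200000
            }
        ]
        ,'logic': 'AND'
    }
})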