Cannot update document by index in FaunaDB

I'm attempting to use an index to update a document in my FaunaDB collection, using FQL:
Update(
  Match(
    Index('users_by_id'),
    'user-1'
  ),
  {
    data: {
      name: 'John'
    }
  }
)
This query gives me the following error:
Error: [
  {
    "position": [
      "update"
    ],
    "code": "invalid argument",
    "description": "Ref expected, Set provided."
  }
]
How can I update the document using the index users_by_id?

Match returns a set reference, not a document reference, because there could be zero or more matching documents.
If you are certain that there is a single document that matches, you can use Get. When you call Get with a set reference (instead of a document reference), the first item of the set is retrieved. Since Update requires a document reference, you can then use Select to retrieve the fetched document's reference.
For example:
Update(
  Select(
    "ref",
    Get(Match(Index('users_by_id'), 'user-1'))
  ),
  {
    data: {
      name: 'John'
    }
  }
)
If you have more than one match, you should use Paginate to "realize" the set into an array of matching documents, and then Map over the array to perform a bulk update:
Map(
  Paginate(
    Match(Index('users_by_id'), 'user-1')
  ),
  Lambda(
    "ref",
    Update(
      Var("ref"),
      {
        data: {
          name: "John"
        }
      }
    )
  )
)
Note: For this to work, your index has to have an empty values definition, or it must explicitly define the ref field as the one and only value. If your index returns multiple fields, the Lambda function has to be updated to accept the same number of parameters as are defined in your index's values definition.
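For instance, here is a sketch assuming a hypothetical index whose values definition lists data.name followed by ref; the Lambda takes one parameter per defined value, in the same order:
Map(
  Paginate(
    Match(Index('users_by_id'), 'user-1')
  ),
  Lambda(
    ["name", "ref"],
    Update(
      Var("ref"),
      { data: { name: "John" } }
    )
  )
)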

Related

Creating an index for all active items

I have a collection of documents that follow this schema: {label: String, status: Number}.
I want to introduce a new field, deleted_at: Date that will hold information if a document has already been deleted. Seems like a perfect use case for an index, to be able to search for all undeleted tasks.
CreateIndex({
  name: "activeTasks",
  source: Collection("tasks"),
  terms: [
    { field: ["data", "deleted_at"] }
  ]
})
And then filter by undefined / null value in shell:
Paginate(Match(Index("activeTasks"), null))
Paginate(Match(Index("activeTasks"), undefined))
It returns nothing, even for documents where I explicitly set deleted_at to null.
That's not my point, though. I want to get documents that do not have the deleted_at defined at all, so that I do not have to update the whole collection.
P.S. When I add a document with deleted_at: "test" and query for it, the shell does return the expected result.
What am I not getting?
The reason is that FaunaDB doesn't support reading empty/null values the way you might expect. You need a special binding to do that.
Make sure to check out https://docs.fauna.com/fauna/current/tutorials/indexes/bindings.html#empty for a more thorough explanation and examples.
My understanding of how bindings work would yield the following code. I haven't tested it though and I'm not sure it works.
You need a special binding index:
CreateIndex({
  name: "activeTasks",
  source: [{
    collection: Collection("tasks"),
    fields: {
      null_deleted_at: Query(
        Lambda(
          "doc",
          Equals(Select(["data", "deleted_at"], Var("doc"), null), null)
        )
      )
    }
  }],
  terms: [{ binding: "null_deleted_at" }]
})
Usage:
Map(
  Paginate(Match(Index("activeTasks"), true)),
  Lambda("X", Get(Var("X")))
)

How to select specific fields on FaunaDB Query Language?

I can't find anything about how to do this type of query in FaunaDB. I need to select only specific fields from a document, not all of them. I can select one field using the Select function, like below:
serverClient.query(
  q.Map(
    q.Paginate(q.Documents(q.Collection('products')), {
      size: 12,
    }),
    q.Lambda('X', q.Select(['data', 'title'], q.Get(q.Var('X'))))
  )
)
Forget the selectAll function, it's deprecated.
You can also return an object literal like this:
serverClient.query(
  q.Map(
    q.Paginate(q.Documents(q.Collection('products')), {
      size: 12,
    }),
    q.Lambda(
      'X',
      {
        title: q.Select(['data', 'title'], q.Get(q.Var('X'))),
        otherField: q.Select(['data', 'other'], q.Get(q.Var('X')))
      }
    )
  )
)
Also, note that in your original question the quotation marks at ['data', 'title'] were misplaced (['data, title']).
One way to achieve this would be to create an index that returns the values required. For example, if using the shell:
CreateIndex({
  name: "<name of index>",
  source: Collection("products"),
  values: [
    { field: ["data", "title"] },
    { field: ["data", "<another field name>"] }
  ]
})
Querying that index then returns only the fields defined in its values definition:
Map(
  Paginate(
    Match(Index("<name of index>"))
  ),
  Lambda("product", Var("product"))
)
Although these examples are to be used in the shell, they can easily be used in code by adding a q. in front of each built-in function.
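For example, the index query above translated to the JavaScript driver (the index name is still a placeholder):
const faunadb = require('faunadb');
const q = faunadb.query;
const client = new faunadb.Client({ secret: 'YOUR_FAUNA_SECRET' });

client.query(
  q.Map(
    q.Paginate(q.Match(q.Index('<name of index>'))),
    q.Lambda('product', q.Var('product'))
  )
)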

Can I update a FaunaDB document without knowing its ID?

FaunaDB's documentation covers how to update a document, but their example assumes that I'll have the id to pass into Ref:
Ref(schema_ref, id)
client.query(
  q.Update(
    q.Ref(q.Collection('posts'), '192903209792046592'),
    { data: { text: "Example" } }
  )
)
However, I'm wondering if it's possible to update a document without knowing its id. For instance, if I have a collection of users, can I find a user by their email, and then update their record? I've tried this, but Fauna returns a 400 (Database Ref expected, String provided):
client
  .query(
    q.Update(
      q.Match(
        q.Index("users_by_email", "me@example.com")
      ),
      { name: "Em" }
    )
  )
Although Ben's comments are correct (that's the way you do it), I wanted to note that the error you are receiving is because you are missing a bracket here: "users_by_email"), "me@example.com"
The error is logical if you know that Index takes an optional database reference as second argument.
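In other words, the email ended up as Index's second argument instead of Match's. A minimal before/after sketch of the call:
q.Match(q.Index("users_by_email", "me@example.com")) // wrong: the email is parsed as a database reference
q.Match(q.Index("users_by_email"), "me@example.com") // right: the email is the term passed to Match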
To clarify what Ben said:
If you do this you'll get another error:
Update(
  Match(
    Index("accounts_by_email"), "test@test.com"
  ),
  { data: { email: "test2@test.com" } }
)
That's because Match could potentially return more than one element: it returns a set of references, called a SetRef. Think of SetRefs as lists that are not materialized yet. If you are certain there is only one match for that e-mail (e.g. if you set a uniqueness constraint), you can materialize it using Paginate or Get:
Get:
Update(
  Select(['ref'], Get(Match(
    Index("accounts_by_email"), "test@test.com"
  ))),
  { data: { email: 'test2@test.com' } }
)
Get returns the complete document, so we need to specify that we only want its reference, using Select(['ref'], ...).
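For context, a fetched document looks roughly like this (placeholder values), which is why the ref field has to be selected out:
{
  ref: Ref(Collection("accounts"), "266000000000000000"),
  ts: 1610000000000000,
  data: { email: "test@test.com" }
}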
Paginate:
Update(
  Select(['data', 0],
    Paginate(Match(
      Index("accounts_by_email"), "test@test.com"
    ))
  ),
  { data: { email: "testchanged@test.com" } }
)
You are very close! Update does require a ref. You can get one via your index, though. Assuming your index has a default values setting (i.e. paginating a match returns a page of refs) and you are confident that there is a single match, or that the first match is the one you want, you can do Select(["ref"], Get(Match(Index("users_by_email"), "me@example.com"))) to transform your set ref into a document ref. This can then be passed to Update (or to any other function that wants a document ref, like Delete).
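For reference, the uniqueness constraint mentioned above is declared on the index itself. A sketch, assuming the collection is named users:
CreateIndex({
  name: "users_by_email",
  source: Collection("users"),
  terms: [{ field: ["data", "email"] }],
  unique: true
})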

Querying Deep JSONb Information - PostgreSQL

I have the following JSON stored in a row:
{
  "openings": [
    {
      "visibleFormData": {
        "productName": "test"
      }
    }
  ]
}
I'm trying to get the value of productName. So far I've tried something like this:
SELECT tbl.column->'openings'->'0'->'visibleFormData'->>'productName'
The theory being that this would grab the first object (index 0) in the openings array and then grab the productName attribute from that object's visibleFormData object.
All I'm getting is null, though. I've tried multiple configurations of this. I'm thinking it has to do with the grabbing of index zero, but I am unsure. I am not a regular PSQL user, so it's proving a tad tricky to debug.
The JSON array index is an integer, so use 0 instead of '0':
with tbl(col) as (
  values (
    '{
      "openings": [
        {
          "visibleFormData": {
            "productName": "test"
          }
        }
      ]
    }'::jsonb
  )
)
SELECT tbl.col->'openings'->0->'visibleFormData'->>'productName'
FROM tbl;

 ?column?
----------
 test
(1 row)
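Equivalently, the whole path can be given at once with the #>> operator, which takes a text-array path (array indexes written as text inside the path):
SELECT tbl.col #>> '{openings,0,visibleFormData,productName}'
FROM tbl;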

Arangodb dynamic index on object keys

ArangoDB 2.8b3
I have documents with a property "specification" that can have 1-100 keys inside, like:
document {
  ...
  specification: {
    key1: "value",
    ...
    key10: "value"
  }
}
The task: fast queries by a specification key.
FOR Doc IN MyCollection FILTER Doc.specification['key1'] == "value" RETURN Doc
I tried creating hash indexes with the fields "specification", "specification.*", "specification[*]", and "specification[*].*". The index is never used. Is there any solution without reorganizing the structure, or are there plans to support this in the future?
No, we currently don't have any smart idea how to handle indices for structures like that. The memory usage would also increase since the attribute names would also have to be present in the index for each indexed value.
What we will release with 2.8 is the ability to use indices on array structures:
db.posts.ensureIndex({ type: "hash", fields: [ "tags[*]" ] });
with documents like:
{ tags: [ "foobar", "bar", "anotherTag" ] }
Using AQL queries like this:
FOR doc IN posts
  FILTER 'foobar' IN doc.tags[*]
  RETURN doc
You can also index attributes of objects nested inside arrays:
db.posts.ensureIndex({ type: "hash", fields: [ "tags[*].value" ] });
db.posts.insert({
  tags: [
    { key: "key1", value: "foobar" },
    { key: "key2", value: "baz" },
    { key: "key3", value: "quux" }
  ]
});
The following query will then use the array index:
FOR doc IN posts
  FILTER 'foobar' IN doc.tags[*].value
  RETURN doc
However, the asterisk can only be used for array accesses - it can't substitute key matches in objects.
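If restructuring ever becomes an option, the array index above suggests a workaround (an untested sketch, reusing the collection name from the question): store the specification as an array of key/value pairs and index its members as whole values:
db.MyCollection.ensureIndex({ type: "hash", fields: [ "specification[*]" ] });
db.MyCollection.insert({
  specification: [ { key: "key1", value: "value" } ]
});

FOR doc IN MyCollection
  FILTER { key: "key1", value: "value" } IN doc.specification[*]
  RETURN doc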