API-side validateUniqueness for multiple fields in RedwoodJS - validates-uniqueness-of

According to the Redwood docs, we can check the uniqueness of a field before starting the process like this:
validateUniqueness('user', { field: value }, (db) => {})
But how can we validate multiple fields at the same time?
Note: the following won't operate correctly:
validateUniqueness('user', { field1: value, field2: value }, (db) => {})
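One possible workaround (an untested sketch; it assumes validateUniqueness calls can be nested, and the user model, field names and createUser service are placeholders rather than Redwood-confirmed behaviour) is to validate each field with its own call and perform the write in the innermost callback:
// Untested sketch: one validateUniqueness call per field.
// Note each check runs in its own transaction, so the combination is not atomic.
export const createUser = ({ input }) => {
  return validateUniqueness('user', { field1: input.field1 }, () => {
    return validateUniqueness('user', { field2: input.field2 }, (db) => {
      return db.user.create({ data: input })
    })
  })
}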

Related

KeystoneJS `filter` vs `Item` list access control

I am trying to understand in more depth the difference between filter and item access control.
Basically, I understand that item access control is a sort of higher-order check and will run before the GraphQL filter.
My question is: if I am filtering on a specific field while updating, for instance a groupID or something like that, do I need to do the same check in item access control?
This would cause an extra database query on top of the filter.
Any thoughts on that?
The TL;DR answer...
if I am doing a filter on a specific field [..] do I need to do the same check in Item Access Control?
No, you only need to apply the restriction in one place or the other.
Generally speaking, if you can describe the restriction using filter access control (i.e. as a GraphQL-style filter, with the args provided) then that's the best place to do it. But if your access control needs to behave differently based on values in the current item or the specific changes being made, item access control may be required.
Background
Access control in Keystone can be a little hard to get your head around but it's actually very powerful and the design has good reasons behind it. Let me attempt to clarify:
Filter access control is applied by adding conditions to the queries run against the database.
Imagine a content system with lists for users and posts. Users can author a post but some posts are also editable by everyone. The Post list config might have something like this:
// ..
access: {
  filter: {
    update: () => ({ isEditable: { equals: true } }),
  }
},
// ..
What that's effectively doing is adding a condition to all update queries run for this list. So if you update a post like this:
mutation {
  updatePost(where: { id: "123" }, data: { title: "Best Pizza" }) {
    id
    name
  }
}
The SQL that runs might look like this:
update "Post"
set title = 'Best Pizza'
where id = 123 and "isEditable" = true;
Note the isEditable condition that's automatically added by the update filter. This is pretty powerful in some ways but also has its limits – filter access control functions can only return GraphQL-style filters which prevents them from operating on things like virtual fields, which can't be filtered on (as they don't exist in the database). They also can't apply different filters depending on the item's current values or the specific updates being performed.
Filter access control functions can access the current session, so can do things like this:
filter: {
  // If the current user is an admin, don't apply the usual filter for editability
  update: ({ session }) => {
    return session.isAdmin ? {} : { isEditable: { equals: true } };
  },
}
But you couldn't do something like this, referencing the current item data:
filter: {
  // ⚠️ This is broken; filter access control functions don't receive the current item ⚠️
  // Intended: the current user can update any post they authored, regardless of the isEditable flag
  update: ({ session, item }) => {
    return item.author === session.itemId ? {} : { isEditable: { equals: true } };
  },
}
The benefit of filter access control is that it doesn't force Keystone to read an item before an operation occurs; the filter is effectively added to the operation itself. This can make these filters more efficient for the DB but does limit them somewhat. Note that things like hooks may also cause an item to be read before an operation is performed, so this performance difference isn't always evident.
Item access control is applied in the application layer, by evaluating the JS function supplied against the existing item and/or the new data supplied.
This makes them a lot more powerful in some respects. You can, for example, implement the previous use case, where authors are allowed to update their own posts, like this:
item: {
  // The current user can update any post they authored, regardless of the isEditable flag
  update: ({ session, item }) => {
    return item.author === session.itemId || item.isEditable;
  },
}
Or add further restrictions based on the specific updates being made, by referencing the inputData argument.
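For example, a rough sketch (assuming the Keystone 6 item access signature and reusing the hypothetical author and isEditable fields from above) that also stops non-authors from toggling the editability flag:
item: {
  update: ({ session, item, inputData }) => {
    const isAuthor = item.author === session.itemId;
    // Only the author may change the isEditable flag itself
    if (inputData.isEditable !== undefined && !isAuthor) return false;
    return isAuthor || item.isEditable;
  },
}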
So item access control is arguably more powerful, but it can have significant performance implications – not so much for mutations, which are likely to be performed in small quantities, but definitely for read operations. In fact, Keystone won't let you define item access control for read operations. If you stop and think about this, you might see why – doing so would require reading all items in the list out of the DB and running the access control function against each one, every time the list was read. As such, the items accessible can only be restricted using filter access control.
Tip: If you think you need item access control for reads, consider putting the relevant business logic in a resolveInput hook that stores the relevant values as fields, then referencing those fields using filter access control.
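A minimal sketch of that pattern (assuming the Keystone 6 hooks API; isPubliclyReadable, status and isArchived are hypothetical fields that would need to exist on the list):
hooks: {
  // Recompute the derived flag on every create/update
  resolveInput: ({ resolvedData, item }) => {
    const next = { ...item, ...resolvedData };
    return {
      ...resolvedData,
      isPubliclyReadable: next.status === 'published' && !next.isArchived,
    };
  },
},
access: {
  filter: {
    // Reads can now be restricted with a plain filter on the stored field
    query: () => ({ isPubliclyReadable: { equals: true } }),
  },
},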
Hope that helps

Error trying to reorder items within another list in Keystone 6

I'm using KeystoneJS v6. I'm trying to enable functionality which allows me to reorder the placement of images when used in another list. Currently I'm setting up the image list below, however I'm unable to set defaultIsOrderable to true due to the error pasted below.
KeystoneJS list:
Image: list({
  fields: {
    title: text({
      validation: { isRequired: true },
      isIndexed: 'unique',
      isFilterable: true,
      isOrderable: true,
    }),
    images: cloudinaryImage({
      cloudinary: {
        cloudName: process.env.CLOUDINARY_CLOUD_NAME,
        apiKey: process.env.CLOUDINARY_API_KEY,
        apiSecret: process.env.CLOUDINARY_API_SECRET,
        folder: process.env.CLOUDINARY_API_FOLDER,
      },
    }),
  },
  defaultIsOrderable: true,
}),
Error message:
The expected type comes from property 'defaultIsOrderable' which is declared here on type 'ListConfig<BaseListTypeInfo, BaseFields<BaseListTypeInfo>>'
Peeking at the definition of the field shows
defaultIsOrderable?: false | ((args: FilterOrderArgs<ListTypeInfo>) => MaybePromise<boolean>);
Looking at the schema API docs, the defaultIsOrderable option lets you set:
[...] the default value to use for isOrderable for fields on this list.
You're trying to set this to true but, according to the relevant section of the field docs, the isOrderable field option already defaults to true.
I believe this is why the defaultIsOrderable type doesn't allow you to supply the true literal – doing so would be redundant.
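What the type does accept is false, or a function evaluated per request; for example (a sketch; the args shape and the session check are assumptions on my part):
// Only allow ordering by this list's fields for signed-in users (illustrative)
defaultIsOrderable: ({ session }) => !!session,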
So that explains the specific error you're getting, but I think you may also have misunderstood the purpose of the orderBy option.
The OrderBy Option
The field docs mention the two effects the isOrderable field option has:
If true (default), the GraphQL API and Admin UI will support ordering by this field.
Take, for example, your Image list above.
As the title field is "orderable", it is included in the list's orderBy GraphQL type (ImageOrderByInput).
When querying the list, you can order the results by the values in this field, like this:
query {
  images(orderBy: [{ title: desc }]) {
    id
    title
    images { publicUrl }
  }
}
The GraphQL API docs have some details on this.
You can also use the field to order items when listing them in the Admin UI, either by clicking the column heading or selecting the field from the "sort" dropdown.
Note though, these features order items at runtime, by the values stored in orderable fields.
They don't allow an admin to "re-order" items in the Admin UI (unless you did so by changing the image titles in this case).
Specifying an Order
If you want to set the order of items within a list you'd need to store separate values in, for example, a displayOrder field like this:
Image: list({
  fields: {
    title: text({
      validation: { isRequired: true },
      isIndexed: 'unique',
      isFilterable: true,
    }),
    displayOrder: integer(),
    // ...
  },
}),
Unfortunately Keystone doesn't yet give you a great way to manage this in the Admin UI (i.e. you can't "drag and drop" in the list view or anything like that). You need to edit each item individually to set the displayOrder values.
Ordering Within a Relationship
I notice your question says you're trying to "reorder the placement of images when used in another list" (emphasis mine).
In this case you're talking about relationships, which changes the problem somewhat. Some approaches are..
If the relationship is one-to-many, you can use the displayOrder: integer() solution shown above, but the UX is worse again. You're still setting the order values against each item, but not in the context of the relationship. However, querying based on these order values and setting them via the GraphQL API should be fairly straightforward.
If the relationship is many-to-many, it's similar but you can't store the "displayOrder" value in the Image list as any one image may be linked to multiple other items. You need to store the order info "with" the relationship itself. It's not trivial but my recent answer on storing additional values on a many-to-many relationship may point you in the right direction.
A third option is to not use the relationship field at all but to link items using the inline relationships functionality of the document field. This is a bit different to work with - easier to manage from the Admin UI but less powerful in GraphQL as you can't traverse the relationship as easily. However it does give you a way to manage a small, ordered set of related items in a many-to-many relationship.
You can also save an ordered set of ids to a json field (quick sketch below). This is similar to using a document field but more manual.
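For example, something like this on the list that owns the ordering (a sketch; the field name and defaultValue are illustrative, and your resolvers/UI would be responsible for keeping the ids valid):
// json comes from @keystone-6/core/fields
// Stores an ordered array of related Image ids
imageOrder: json({ defaultValue: [] }),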
Hopefully that clears up what's possible with the current "orderBy" functionality and relationship options. Which of these solutions is most appropriate depends heavily on the specifics of your project and use case.
Note too, there are plans to extend Keystone's functionality for sorting and reordering lists from both the DX and UX perspectives.
See "Sortable lists" on the Keystone roadmap.

Use Postgres generated columns in Sequelize model

I have a table where it's beneficial to generate a pre-calculated value in the database engine rather than in my application code. For this, I'm using Postgres' generated column feature. The SQL is like this:
ALTER TABLE "Items"
  ADD "generatedValue" DOUBLE PRECISION GENERATED ALWAYS AS (
    LEAST("someCol", "someOtherCol")
  ) STORED;
This works well, but I'm using Sequelize with this database. I want to find a way to define this column in my model definition, so that Sequelize will query it, not attempt to update a row's value for that column, and ideally will create the column on sync.
class Item extends Sequelize.Model {
  static init(sequelize) {
    return super.init({
      someCol: Sequelize.DOUBLE,
      someOtherCol: Sequelize.DOUBLE,
      generatedValue: // <<<-- What goes here??
    }, { sequelize });
  }
}
How can I do this with Sequelize?
I can specify the column as a DOUBLE, and Sequelize will read it, but the column won't be created correctly on sync. Perhaps there's some post-sync hook I can use? I was considering afterSync to drop the column and re-add it with my generated value statement, but I would first need to detect that the column wasn't already converted or I would lose my data. (I run sync [without force: true] on every app startup.)
Any thoughts, or alternative ideas would be appreciated.
Until Sequelize supports readOnly fields and the GENERATED datatype, you can work around it with a custom datatype:
const Item = sequelize.define('Item', {
  someCol: { type: DataTypes.DOUBLE },
  someOtherCol: { type: DataTypes.DOUBLE },
  generatedValue: {
    type: 'DOUBLE PRECISION GENERATED ALWAYS AS (LEAST("someCol", "someOtherCol")) STORED',
    set() {
      throw new Error('generatedValue is read-only')
    },
  },
})
This will generate the column correctly in Postgres when using sync(), and prevent setting generatedValue in JavaScript by throwing an Error.
Assuming that Sequelize never tries to update the field if it hasn't changed, as specified in https://sequelize.org/master/manual/model-instances.html#change-awareness-of-save, it should work.
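A quick usage sketch (the values are illustrative; the generated value is computed by Postgres, so it's read back from the database rather than set in JavaScript):
await sequelize.sync();

// Postgres computes generatedValue = LEAST(someCol, someOtherCol)
const item = await Item.create({ someCol: 3, someOtherCol: 7 });

// Read it back to see the database-generated value
const fresh = await Item.findByPk(item.id);
console.log(fresh.generatedValue); // 3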

Zapier lazy load input fields choices

I'm building a Zapier app for a platform that has dynamic fields. I have an API that returns the list of fields for one of my resources, for example:
[
  { name: "First Name", key: "first_name", type: "String" },
  { name: "Civility", key: "civility", type: "Multiple" }
]
I build my action's inputFields based on this API:
create: {
  [...],
  operation: {
    inputFields: [
      fetchFields()
    ],
    [...]
  },
}
The API returns types that are lists of values (e.g. Civility), but to get those values I have to make another API call.
For now, what I have done is: in my fetchFields function, each time I encounter a type: "Multiple", I make another API call to get the possible values and set them as choices on the input field. However, this is expensive and the page on Zapier takes too much time to display the fields.
I tried to use the z.dehydrate feature provided by Zapier but it doesn't work for input choices.
I can't use a dynamic dropdown here as I can't pass the key of the field whose possible values I'm looking for. For example, to get back the possible values for Civility, I'd need to pass the civility key to my API.
What are the options in this case?
David here, from the Zapier Platform team.
Thanks for writing in! I think what you're doing is possible, but I'm also not 100% sure that I understand what you're asking.
You can have multiple API calls in the function (which it sounds like you are doing). In the end, the function should return an array of Field objects (as described here).
The key thing you might not be aware of is that subsequent steps have access to a partially-filled bundle.inputData, so you can have a first function that gets field options and allows a user to select something, then a second function that runs and pulls in fields based on that choice.
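A rough sketch of that two-function pattern (the endpoints, keys and response shapes here are assumptions, not your actual API; response.data also assumes a platform-core version that parses JSON bodies):
const pickField = async (z, bundle) => {
  // Hypothetical endpoint returning [{ name, key, type }, ...]
  const res = await z.request('https://example.com/api/fields');
  return [
    {
      key: 'field_key',
      label: 'Field',
      choices: res.data.map((f) => f.key),
      // Re-run the functions below whenever this value changes
      altersDynamicFields: true,
    },
  ];
};

const fieldChoices = async (z, bundle) => {
  // bundle.inputData is partially filled here, so we only fetch choices
  // for the field the user selected above
  if (!bundle.inputData.field_key) {
    return [];
  }
  const res = await z.request(
    'https://example.com/api/fields/' + bundle.inputData.field_key + '/values'
  );
  // Assumes the endpoint returns an array of strings
  return [{ key: bundle.inputData.field_key, label: 'Value', choices: res.data }];
};

// ...then, in the action definition:
// operation: { inputFields: [pickField, fieldChoices] }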
Otherwise, I think a function that does two API calls (one to fetch the field types and one to turn them into Zapier field objects) is the best bet.
If this didn't answer your question, feel free to email partners#zapier.com or join the slack org (linked at the bottom of the readme) and we'll try to solve it there.

canEdit not working in dgrid while loading by default

I have a tree grid with the following UI requirements for editing:
The Cost column is editable for certain rows.
Editable rows should be editable by default, always, and not based on any event.
Each row has a min/max range, and any value the user enters needs to be validated against it.
Here is the column structure I have defined for dgrid.
var columns = [
  tree({ label: "Name", field: "name" }),
  { label: "Description", field: "description" },
  editor({
    label: "Cost",
    field: "cost",
    canEdit: function (rowItem) { return rowItem.isEditable; }
  }, dijit.form.NumberTextBox),
  { label: "Min - Max Range", field: "minRange", get: getMinMax, id: 'minMax' }
];
Though the tree and editing are working fine, I have a few issues to resolve.
When editOn is not provided for the editor, the column is editable by default. However, canEdit is only invoked when a special event is provided in the editOn parameter. Is there a way to get canEdit invoked during the default load as well?
I need to set a range constraint on the NumberTextBox dynamically for each row. Is there an easy way to set the constraint based on the row's values?
Thank you very much for your help
As for canEdit being invoked when editOn is false, check:
https://github.com/SitePen/dgrid/issues/623
As for dynamically setting a value based on row values, you can try extending the widget: in startup, after this.inherited(arguments), do:
var _row = this.grid.grid.row(this.domNode.parentNode);
this.query = { myParam: _row.data.maxRange };
Tsemach.