How to insert an item into a sequence using Sequelize, or How to manage an ordering attribute - sql

I have an entity with a sequence attribute, which is an integer from 1 to N for the N members of the list. They are polyline points.
I want to be able to insert into the list at a given sequence point and increment that item and every item beyond it to make room for the new one; likewise, on delete, decrement everything above the removed position so the sequence stays contiguous with no missing numbers.
There is a REST interface in front of this of course, but I don't want to hack about with that; I just want Sequelize to magically manage this sequence number.
I am assuming I need to get hold of some "before insert" and "after delete" hooks in Sequelize and issue some SQL to make this happen. Is that assumption correct, or is there some cooler way of doing it?

I haven't tested this exhaustively, but this appears to be the solution, and it's barely worth comment.
I know the modelName, and name is the attribute name:
options.hooks = {
  // Sequelize's standard hook names are beforeCreate and afterDestroy
  // (not beforeInsert/afterDelete)
  beforeCreate: function (record, options) {
    return self.models[modelName].incrementAfter(name, record[name]);
  },
  afterDestroy: function (record, options) {
    return self.models[modelName].decrementAfter(name, record[name]);
  }
};
and then, added to my extended model prototype, I have:
incrementAfter: function (field, position) {
  // A replacement avoids interpolating the position straight into the SQL
  return this.sequelize.query(
    'UPDATE ' + this.tableName + ' SET ' + field + ' = ' + field + ' + 1 WHERE ' + field + ' >= :position',
    { replacements: { position: position } }
  );
},
decrementAfter: function (field, position) {
  return this.sequelize.query(
    'UPDATE ' + this.tableName + ' SET ' + field + ' = ' + field + ' - 1 WHERE ' + field + ' >= :position',
    { replacements: { position: position } }
  );
},
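For illustration, here's a minimal self-contained sketch of the same idea wired directly into a model definition. The Point model, seq attribute, and connection string are assumptions for the example, not from my actual setup:
const { Sequelize, DataTypes } = require('sequelize');

// Hypothetical connection, for illustration only.
const sequelize = new Sequelize('postgres://localhost/demo');

// Polyline points kept in order by a contiguous "seq" column.
const Point = sequelize.define('Point', {
  seq: { type: DataTypes.INTEGER, allowNull: false },
  x: DataTypes.DOUBLE,
  y: DataTypes.DOUBLE,
}, {
  hooks: {
    // Shift everything at or beyond the new position up by one.
    beforeCreate: (point) => sequelize.query(
      'UPDATE "Points" SET "seq" = "seq" + 1 WHERE "seq" >= :pos',
      { replacements: { pos: point.seq } }
    ),
    // Close the gap left by a removed point.
    afterDestroy: (point) => sequelize.query(
      'UPDATE "Points" SET "seq" = "seq" - 1 WHERE "seq" >= :pos',
      { replacements: { pos: point.seq } }
    ),
  },
});

// Inserting at position 3 first renumbers points 3..N to 4..N+1:
// await Point.create({ seq: 3, x: 10, y: 20 });
One caveat: per-record hooks like these are skipped by bulk operations unless you pass individualHooks: true, and ideally the shift and the insert would share a transaction so a failed insert can't leave a gap in the sequence.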

Related

KeystoneJS `filter` vs `Item` list access control

I am trying to understand in more depth the difference between filter and item access control.
Basically, I understand that item access control is, sort of, a higher-order check that will run before the GraphQL filter.
My question is: if I am filtering on a specific field while updating, for instance a groupID or something like this, do I need to do the same check in item access control?
That would add an extra database query on top of the one that applies the filter.
Any thoughts on that?
The TL;DR answer...
if I am doing a filter on a specific field [..] do I need to do the same check in Item Access Control?
No, you only need to apply the restriction in one place or the other.
Generally speaking, if you can describe the restriction using filter access control (i.e. as a GraphQL-style filter, with the args provided) then that's the best place to do it. But if your access control needs to behave differently based on values in the current item or the specific changes being made, item access control may be required.
Background
Access control in Keystone can be a little hard to get your head around but it's actually very powerful and the design has good reasons behind it. Let me attempt to clarify:
Filter access control is applied by adding conditions to the queries run against the database.
Imagine a content system with lists for users and posts. Users can author a post but some posts are also editable by everyone. The Post list config might have something like this:
// ..
access: {
  filter: {
    update: () => ({ isEditable: { equals: true } }),
  }
},
// ..
What that's effectively doing is adding a condition to all update queries run for this list. So if you update a post like this:
mutation {
  updatePost(where: { id: "123" }, data: { title: "Best Pizza" }) {
    id
    title
  }
}
The SQL that runs might look like this:
update "Post"
set title = 'Best Pizza'
where id = 234 and "isEditable" = true;
Note the isEditable condition that's automatically added by the update filter. This is pretty powerful in some ways but also has its limits – filter access control functions can only return GraphQL-style filters which prevents them from operating on things like virtual fields, which can't be filtered on (as they don't exist in the database). They also can't apply different filters depending on the item's current values or the specific updates being performed.
Filter access control functions can access the current session, so can do things like this:
filter: {
  // If the current user is an admin, don't apply the usual filter for editability
  update: ({ session }) => {
    return session.isAdmin ? {} : { isEditable: { equals: true } };
  },
}
But you couldn't do something like this, referencing the current item data:
filter: {
  // ⚠️ this is broken; filter access control functions don't receive the current item ⚠️
  // The current user can update any post they authored, regardless of the isEditable flag
  update: ({ session, item }) => {
    return item.author === session.itemId ? {} : { isEditable: { equals: true } };
  },
}
The benefit of filter access control is that it doesn't force Keystone to read an item before an operation occurs; the filter is effectively added to the operation itself. This can make them more efficient for the DB, but does limit them somewhat. Note that things like hooks may also cause an item to be read before an operation is performed, so this performance difference isn't always evident.
Item access control is applied in the application layer, by evaluating the supplied JS function against the existing item and/or the new data.
This makes them a lot more powerful in some respects. You can, for example, implement the previous use case, where authors are allowed to update their own posts, like this:
item: {
  // The current user can update any post they authored, regardless of the isEditable flag
  update: ({ session, item }) => {
    return item.author === session.itemId || item.isEditable;
  },
}
Or add further restrictions based on the specific updates being made, by referencing the inputData argument.
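For example, something like this would let any permitted user edit a post while reserving publication for its author (a hedged sketch; the isPublished field and the rule itself are invented for illustration, following the field conventions of the earlier examples):
item: {
  // Hypothetical rule: anyone passing the filters may edit a post,
  // but only its author may change the isPublished flag.
  update: ({ session, item, inputData }) => {
    if (inputData.isPublished !== undefined) {
      return item.author === session.itemId;
    }
    return true;
  },
}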
So item access control is arguably more powerful, but it can have significant performance implications – not so much for mutations, which are likely to be performed in small quantities, but definitely for read operations. In fact, Keystone won't let you define item access control for read operations. If you stop and think about this, you might see why – doing so would require reading all items in the list out of the DB and running the access control function against each one, every time the list was read. As such, the items accessible can only be restricted using filter access control.
Tip: If you think you need item access control for reads, consider putting the relevant business logic in a resolveInput hook that stores the relevant values as fields, then referencing those fields using filter access control.
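As a rough sketch of that pattern (the list config, the isVisible field, and the computeVisibility helper are all hypothetical):
import { list } from '@keystone-6/core';
import { checkbox } from '@keystone-6/core/fields';

const Post = list({
  fields: {
    // ...other fields...
    isVisible: checkbox(), // precomputed flag the read filter can use
  },
  hooks: {
    // Flatten the business logic into a stored field on every write
    resolveInput: ({ item, resolvedData }) => ({
      ...resolvedData,
      isVisible: computeVisibility({ ...item, ...resolvedData }), // hypothetical helper
    }),
  },
  access: {
    filter: {
      // Reads can now be restricted with a plain filter
      query: () => ({ isVisible: { equals: true } }),
    },
  },
});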
Hope that helps

Use Postgres generated columns in Sequelize model

I have a table where it's beneficial to generate a pre-calculated value in the database engine rather than in my application code. For this, I'm using Postgres' generated column feature. The SQL is like this:
ALTER TABLE "Items"
ADD "generatedValue" DOUBLE PRECISION GENERATED ALWAYS AS (
LEAST("someCol", "someOtherCol")
) STORED;
This works well, but I'm using Sequelize with this database. I want to find a way to define this column in my model definition, so that Sequelize will query it, not attempt to update a row's value for that column, and ideally will create the column on sync.
class Item extends Sequelize.Model {
  static init(sequelize) {
    return super.init({
      someCol: Sequelize.DOUBLE,
      someOtherCol: Sequelize.DOUBLE,
      generatedValue: // <<<-- What goes here??
    });
  }
}
How can I do this with Sequelize?
I can specify the column as a DOUBLE, and Sequelize will read it, but the column won't be created correctly on sync. Perhaps there's some post-sync hook I can use? I was considering afterSync to drop the column and re-add it with my generated value statement, but I would first need to detect that the column wasn't already converted or I would lose my data. (I run sync [without force: true] on every app startup.)
Any thoughts, or alternative ideas would be appreciated.
Until Sequelize supports readOnly fields and the GENERATED datatype, you can get around Sequelize with a custom datatype:
const Item = sequelize.define('Item', {
  someCol: { type: DataTypes.DOUBLE },
  someOtherCol: { type: DataTypes.DOUBLE },
  generatedValue: {
    type: 'DOUBLE PRECISION GENERATED ALWAYS AS (LEAST("someCol", "someOtherCol")) STORED',
    set() {
      throw new Error('generatedValue is read-only');
    },
  },
});
This will generate the column correctly in Postgres when using sync(), and prevent setting generatedValue in JavaScript by throwing an Error.
Assuming that Sequelize never tries to update the field if it hasn't changed, as specified in https://sequelize.org/master/manual/model-instances.html#change-awareness-of-save, it should work.
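To illustrate (a hedged sketch; the column values are made up):
// Inside an async function, after defining Item as above:
await sequelize.sync(); // creates the column with its GENERATED ALWAYS AS clause

await Item.create({ someCol: 3.5, someOtherCol: 2.0 });

// Read it back as a plain object so the throwing setter is never invoked.
const row = await Item.findOne({ raw: true });
console.log(row.generatedValue); // 2 -- LEAST(3.5, 2.0), computed by Postgres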

How do I implement, for instance, "group membership" many-to-many in Parse.com REST Cloud Code?

A user can create groups
A group has to have been created by a user
A user can belong to multiple groups
A group can have multiple users
I have something like the following:
Parse.Cloud.afterSave('Group', function(request) {
  var creator = request.user;
  var group = request.object;
  var wasGroupCreated = group.existed;
  if (wasGroupCreated) {
    var hasCreatedRelation = creator.relation('hasCreated');
    hasCreatedRelation.add(group);
    var isAMemberOfRelation = creator.relation('isMemberOf');
    isAMemberOfRelation.add(group);
    creator.save();
  }
});
Now when I GET user/me with include=isMemberOf,hasCreated, it returns me the user object but with the following:
hasCreated: {
  __type: "Relation",
  className: "Group"
},
isMemberOf: {
  __type: "Relation",
  className: "Group"
}
I'd like to have the group objects included in say, 'hasCreated' and 'isMemberOf' arrays. How do I pull that using the REST API?
More in general though, am I approaching this the right way? Thoughts? Help is much appreciated!
First off, existed is a function that returns true or false (in your case the wasGroupCreated variable is always going to be a reference to the function and will thus always evaluate to true). It probably isn't going to return what you expect anyway, even if you were calling it correctly.
I think what you want is the isNew() function, though I would test whether this works in the Parse.Cloud.afterSave() method, as I haven't tried it there.
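For reference, a hedged rewrite of the hook from the question with the check actually invoked (verify that existed() behaves this way inside afterSave on your Parse version; isNew() is the alternative to try):
Parse.Cloud.afterSave('Group', function(request) {
  var creator = request.user;
  var group = request.object;
  // existed() must be called; it returns false when the object was just created.
  if (!group.existed()) {
    creator.relation('hasCreated').add(group);
    creator.relation('isMemberOf').add(group);
    creator.save();
  }
});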
As for the second part of your question, you seem to want to use your Relations like arrays. If you used an array instead (and the size was small enough), then you could just include the Group objects in the query (add an include parameter set to isMemberOf, for example, in your REST query), as shown below.
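For example, with isMemberOf stored as an array of Group pointers (hedged; the /1/ path prefix assumes the classic Parse REST API):
GET /1/users/me?include=isMemberOf,hasCreated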
If you do want to stick to Relations, realise that you'll need to read up more in the documentation. In particular you'll need to query the Group object using a where expression that has a $relatedTo pointer for the user. To query in this manner, you will probably need a members property on the Group that is a relation to Users.
Something like this in your REST query might work (replace the objectId with the right User of course):
where={"$relatedTo":{"object":{"__type":"Pointer","className":"_User","objectId":"8TOXdXf3tz"},"key":"members"}}

Check if property exists in RavenDB

I want to add a property to an existing document (using clues from http://ravendb.net/docs/client-api/partial-document-updates). But before adding it, I want to check whether that property already exists in my database.
Is there any "special, proper RavenDB way" to achieve that?
Or should I just load the document and check whether the property is null or not?
You can do this using a set-based database update. You carry it out using JavaScript which, fortunately, is similar enough to C# to make it a pretty painless process for anybody. Here's an example of an update I just ran.
Note: You have to be very careful doing this because errors in your script may have undesired results. For example, in my code CustomId contains something like '1234-1'. In my first iteration of writing the script, I had:
product.Order = parseInt(product.CustomId.split('-'));
Notice I forgot the indexer after split. The result? An error, right? Nope. Order had the value of 12341! It is supposed to be 1. So be careful and be sure to test it thoroughly.
Example:
Job has a Products property (a collection) and I'm adding the new Order property to existing Products.
ravenSession.Advanced.DocumentStore.DatabaseCommands.UpdateByIndex(
    "Raven/DocumentsByEntityName",
    new IndexQuery { Query = "Tag:Jobs" },
    new ScriptedPatchRequest { Script =
        @"
        this.Products.Map(function(product) {
            if (product.Order == undefined)
            {
                product.Order = parseInt(product.CustomId.split('-')[1]);
            }
            return product;
        });"
    }
);
I referenced these pages to build it:
set based ops
partial document updates (in particular the Map section)

Rails3: Cascading Select Writer's Block

I have a big, flat table:
id
product_id
attribute1
attribute2
attribute3
attribute4
Here is how I want users to get to products:
See a list of unique values for attribute1.
Clicking one of those gets you a list of unique values for attribute2.
Clicking one of those gets you a list of unique values for attribute3.
Clicking one of those gets you a list of unique values for attribute4.
Clicking one of those shows you the relevant products.
I have been coding Rails for about 4 years now. I just can't unthink my current approach to this problem.
I have major writer's block. Seems like such an easy problem. But I either code it with 4 different "step" methods in my controller, or I try to write one "search" method that attempts to divine the last level you selected, and all the previous values that you selected.
Both are major YUCK and I keep deleting my work.
What is the most elegant way to do this?
Here is a solution that may be an option. It's just off the top of my head and not tested (so there is probably a more elegant solution). You could use chained scopes in your model:
class Product < ActiveRecord::Base
  scope :with_capacity, lambda { |*args| args.first.nil? ? nil : where(:capacity => args.first) }
  scope :with_weight, lambda { |*args| args.first.nil? ? nil : where(:weight => args.first) }
  scope :with_color, lambda { |*args| args.first.nil? ? nil : where(:color => args.first) }
  scope :with_manufacturer, lambda { |*args| args.first.nil? ? nil : where(:manufacturer => args.first) }

  def self.available_attributes(products, attribute)
    products.collect { |product| product.send(attribute) }.uniq
  end
end
The code above will give you a scope for each attribute. If you pass a parameter to the scope, then it will give you the products with that attribute value. If the argument is nil, then the scope will return the full set (I think ;-). You could keep track of the attributes they are drilling down on in the session with two variables (page_attribute and page_attribute_value) in your controller. Then you call the entire chain to get your list of products (if you want to use them on the page). Next, you can get the attribute values by passing the set of products and the attribute name to Product.available_attributes. Note that this method (Product.available_attributes) is a total hack and would be inefficient for a large set of data, so you may want to make this another scope and use :select => "DISTINCT(your_attribute)" or something more database-efficient instead of iterating through the full set of products as I did in the hack method.
class ProductsController < ApplicationController
  def show
    session[params[:page_attribute].to_sym] = params[:page_attribute_value]
    @products = Product.with_capacity(session[:capacity]).with_weight(session[:weight]).with_color(session[:color]).with_manufacturer(session[:manufacturer])
    @attr_values = Product.available_attributes(@products, params[:page_attribute])
  end
end
Again, I want to warn you that I did not test this code, so it's totally possible that some of the syntax is incorrect, but hopefully this will give you a starting point. Holla if you have any questions about my (pseudo) code.