I have a table where it's beneficial to generate a pre-calculated value in the database engine rather than in my application code. For this, I'm using Postgres' generated column feature. The SQL is like this:
ALTER TABLE "Items"
ADD "generatedValue" DOUBLE PRECISION GENERATED ALWAYS AS (
LEAST("someCol", "someOtherCol")
) STORED;
This works well, but I'm using Sequelize with this database. I want to find a way to define this column in my model definition, so that Sequelize will query it, not attempt to update a row's value for that column, and ideally will create the column on sync.
class Item extends Sequelize.Model {
  static init(sequelize) {
    return super.init({
      someCol: Sequelize.DOUBLE,
      someOtherCol: Sequelize.DOUBLE,
      generatedValue: // <<<-- What goes here??
    }, { sequelize });
  }
}
How can I do this with Sequelize?
I can specify the column as a DOUBLE, and Sequelize will read it, but the column won't be created correctly on sync. Perhaps there's some post-sync hook I can use? I was considering afterSync to drop the column and re-add it with my generated value statement, but I would first need to detect that the column wasn't already converted or I would lose my data. (I run sync [without force: true] on every app startup.)
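Something like this sketch is what I had in mind (untested; it assumes Postgres exposes generated-column status via information_schema.columns.is_generated, and that a sequelize instance is in scope):
Item.addHook('afterSync', async () => {
  // Only convert if "generatedValue" is not already a generated column
  const [rows] = await sequelize.query(
    "SELECT is_generated FROM information_schema.columns " +
    "WHERE table_name = 'Items' AND column_name = 'generatedValue'"
  );
  if (rows.length && rows[0].is_generated !== 'ALWAYS') {
    // Plain column left over from a previous sync: drop and re-add as generated
    await sequelize.query('ALTER TABLE "Items" DROP COLUMN "generatedValue"');
    await sequelize.query(
      'ALTER TABLE "Items" ADD "generatedValue" DOUBLE PRECISION ' +
      'GENERATED ALWAYS AS (LEAST("someCol", "someOtherCol")) STORED'
    );
  }
});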
Any thoughts or alternative ideas would be appreciated.
Until Sequelize supports readOnly fields and the GENERATED datatype, you can get around Sequelize with a custom datatype:
const Item = sequelize.define('Item', {
someCol: { type: DataTypes.DOUBLE },
someOtherCol: { type: DataTypes.DOUBLE },
generatedValue: {
type: 'DOUBLE PRECISION GENERATED ALWAYS AS (LEAST("someCol", "someOtherCol")) STORED',
set() {
throw new Error('generatedValue is read-only')
},
},
})
This will generate the column correctly in Postgres when using sync(), and prevent setting generatedValue in JavaScript by throwing an Error.
Assuming that Sequelize never tries to update the field if it hasn't changed, as specified in https://sequelize.org/master/manual/model-instances.html#change-awareness-of-save, it should work.
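A quick usage sketch (inside an async function; values are hypothetical):
const item = await Item.create({ someCol: 4, someOtherCol: 2 })
await item.reload() // fetch the value Postgres computed
console.log(item.generatedValue) // 2
item.generatedValue = 5 // throws Error: generatedValue is read-only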
Subscription error after adding a new field to an existing schema
Note: We are using Amplify.
When I update data using a GraphQL mutation from a Lambda function, the data gets updated, but on subscribing, the data shows up as NULL. This issue happens only when I change the schema by adding a new field/column; it does not happen when I make no changes to the existing schema.
Expected behaviour: after adding a new field to the schema and updating it through the Lambda mutation, the subscription should receive the updated data.
What actually happens: if we add a new field to the schema and update it using a backend Lambda function's mutation, the new field gets updated; but fetching the updated data through a subscription or query from any frontend/Amplify returns NULL. I think there is a problem with the resolver, which is sending the data as NULL. (A screenshot of the schema with the new address field was attached here.)
Same issue here: after digging a bit, it turns out you need to query all the fields required by the subscription in the mutation that triggers the subscription (in your case, the mutation executed from the Lambda function).
For example, assuming you have a subscription of the following form:
subscription MySubscription {
onSomethingDone {
field1
..
fieldN
}
}
Then you would need to make sure that the mutation in your Lambda function selects all fields from field1 to fieldN, i.e. with the form:
mutation MyMutation {
createSomething {
field1
..
fieldN
}
}
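Concretely, the mutation document built in the Lambda would carry that full selection set. A minimal sketch (type and field names are placeholders):
const mutation = /* GraphQL */ `
  mutation MyMutation($input: CreateSomethingInput!) {
    createSomething(input: $input) {
      field1
      # ...every field the subscription selects...
      fieldN
    }
  }
`
// send this document to the AppSync endpoint as usual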
Hope this helps
I am using the Audit.NET library to log Entity Framework actions into a database. Currently everything goes into one AuditEventLogs table, where the JsonData column stores the data in the following JSON format:
{
  "EventType": "MyDbContext:test_database",
  "StartDate": "2021-06-24T12:11:59.4578873Z",
  "EndDate": "2021-06-24T12:11:59.4862278Z",
  "Duration": 28,
  "EntityFrameworkEvent": {
    "Database": "test_database",
    "Entries": [
      {
        "Table": "Offices",
        "Name": "Office",
        "Action": "Update",
        "PrimaryKey": {
          "Id": "40b5egc7-46ca-429b-86cb-3b0781d360c8"
        },
        "Changes": [
          {
            "ColumnName": "Address",
            "OriginalValue": "test_address",
            "NewValue": "test_address"
          },
          {
            "ColumnName": "Contact",
            "OriginalValue": "test_contact",
            "NewValue": "test_contact"
          },
          {
            "ColumnName": "Email",
            "OriginalValue": "test_email",
            "NewValue": "test_email2"
          },
          {
            "ColumnName": "Name",
            "OriginalValue": "test_name",
            "NewValue": "test_name"
          },
          {
            "ColumnName": "OfficeSector",
            "OriginalValue": 1,
            "NewValue": 1
          },
          {
            "ColumnName": "PhoneNumber",
            "OriginalValue": "test_phoneNumber",
            "NewValue": "test_phoneNumber"
          }
        ],
        "ColumnValues": {
          "Id": "40b5egc7-46ca-429b-86cb-3b0781d360c8",
          "Address": "test_address",
          "Contact": "test_contact",
          "Email": "test_email2",
          "Name": "test_name",
          "OfficeSector": 1,
          "PhoneNumber": "test_phoneNumber"
        },
        "Valid": true
      }
    ],
    "Result": 1,
    "Success": true
  }
}
My team and I have one main goal to achieve:
Being able to create a search page where administrators can tell
who made a change
what they changed
when the change happened
They can give a time period to reduce the number of audit records, and the interesting part comes here:
There should be a text input field that lets them search within the values of the "ColumnValues" section.
The problems I encountered:
Even if I map the JSON structure into relational rows, I am unable to search every column while keeping the search generic.
If I don't map, I could search the JSON string with the SQL LIKE operator, but with a few hundred thousand records the query takes an eternity to finish, so that is probably not the way.
Keeping things generic is important, so we don't need to modify the audit search page every time we create or modify an entity.
I only know MSSQL, but is it possible that storing the audit logs in a document-oriented database like Cosmos DB (or anything else; it was just an example) would solve my problem? Or can I achieve the desired behaviour using a relational database like MSSQL?
Looks like you're asking for an opinion; in that case, I would strongly recommend a document-oriented DB.
Cosmos DB could be a great option since it supports SQL queries.
There is an extension to log to Cosmos DB from Audit.NET: Audit.AzureCosmos.
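Configuration looks roughly like this (a sketch based on the extension's README; the endpoint, key, and names are placeholders, so double-check the fluent API against the current docs):
Audit.Core.Configuration.Setup()
    .UseAzureCosmos(config => config
        .Endpoint("https://mycompany.documents.azure.com:443/")
        .AuthKey("[Auth Key]")
        .Database("Audit")
        .Container("logs"));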
A sample query:
SELECT c.EventType, e.Table, e.Action, ch.ColumnName, ch.OriginalValue, ch.NewValue
FROM c
JOIN e IN c.EntityFrameworkEvent.Entries
JOIN ch IN e.Changes
WHERE ch.ColumnName = "Address" AND ch.OriginalValue = "test_address"
Here is a nice post with lots of examples of complex SQL queries on Cosmos DB.
Is it possible to add and delete columns in my existing database from a controller, without using migrations? And how can my model automatically pick up a newly created column and include it in $fillable? If anyone has an idea on how to approach this type of situation, or could point me to a tutorial, that would be great.
Reason: I have a table with the student mark-book points breakdown columns, for example [Exam, Homework, Quiz, etc.]. Every term or year we remove, change, or add some of them, which is why I need a dynamic approach where I can change a column or add a new one at any time.
Same way the migrations do it: use the Schema builder class. For example:
use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;

$newColumnType = 'string';
$newColumnName = 'my_new_column';

Schema::table('my_table', function (Blueprint $table) use ($newColumnType, $newColumnName) {
    // Dynamic method call, equivalent to e.g. $table->string('my_new_column')
    $table->$newColumnType($newColumnName);
});
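Deleting a column works the same way; a minimal sketch (the column name is a placeholder):
Schema::table('my_table', function (Blueprint $table) {
    // Remove a breakdown column that is no longer needed this term
    $table->dropColumn('my_old_column');
});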
You should probably use $guarded = ['id', 'foo', 'bar'] in your model instead of $fillable if you're going to be adding columns.
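And if you need to discover whatever columns currently exist (e.g. to render the mark-book breakdown dynamically), the Schema facade can list them:
// e.g. ['id', 'Exam', 'Homework', 'Quiz', ...]
$columns = Schema::getColumnListing('my_table');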
I have a MySQL database which has GUIDs stored as binary(16) for the primary keys. I'm using MySQL user-defined routines when inserting and selecting to convert the ids to and from GUIDs (GUIDToBinary() and BinaryToGUID()).
In order to use my routines in Phalcon, I am setting the 'columns' parameter for the find() and findFirst() model functions, which means I'm now working with incomplete objects, as these functions return an instance of Phalcon\Mvc\Model\Row.
The docs state that when using the columns parameter, the following occurs:
Return specific columns instead of the full columns in the model. When using this option an incomplete object is returned
UserController.php
// Returns Phalcon\Mvc\Model\Row
$incompleteUser = User::find(['columns' => 'BinaryToGUID(id) as id, status, username, password, .....']);
// Create a new user object to update
$user = new User();
// Populate with existing data
$user->assign($incompleteUser->toArray());
// Assign new changes requested by the user
$user->assign($this->request->getPost());
// Update
$user->updateUser();
User.php
public function updateUser()
{
$manager = $this->getModelsManager();
return $manager->executeQuery("UPDATE User SET ..... WHERE id = GUIDToBinary(".$this->getDI()->get('db')->escapeString($this->id).")");
}
Irrespective of the fact that I've explicitly defined an update, an insert is performed due to the Model being in a transient state.
One solution I thought of was to move the binary-to-GUID conversion into Phalcon by using model events; however, I can't find a suitable method for performing the conversion when selecting. Updating/inserting is possible using the beforeSave() and beforeUpdate() events. Perhaps I could just have separate getId() and getIdGuid() properties within the model, but I would prefer to avoid this if possible.
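For the write side, this is the sort of thing I mean (an untested sketch; it assumes GUIDToBinary() is a plain hex conversion with no byte-order swapping):
public function beforeSave()
{
    // Convert the 36-char GUID string to raw binary(16) before Phalcon binds it,
    // replacing the GUIDToBinary() call in SQL
    if (strlen($this->id) === 36) {
        $this->id = hex2bin(str_replace('-', '', $this->id));
    }
}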
Is there a way to use MySQL user-defined routines in Phalcon and hydrate my model so that it remains in a persistent state? Or do I need to go down the raw SQL route for my updates and avoid PHQL?
Thanks in advance for your time.
I have an entity with a sequence attribute, which is an integer from 1-N for N members of the list. They are polyline points.
I want to be able to insert into the list at a given sequence point, incrementing all the items at or beyond that point to make room for the new item; likewise, on delete, everything above should be decremented so we still have a nice sequence ordering with no missing numbers.
There is a REST interface in front of this, of course, but I don't want to hack about with that; I just want Sequelize to magically manage this sequence number.
I am assuming I need to get hold of some "before insert" and "after delete" hooks in Sequelize and issue some SQL to make this happen. Is that assumption correct, or is there some cooler way of doing it?
I haven't tested this, but this appears to be the solution, which is barely worth comment.
Given modelName (the model's name) and name (the sequence attribute's name):
options.hooks = {
  // Note: Sequelize's actual hook names are beforeCreate / afterDestroy
  // (there is no beforeInsert / afterDelete)
  beforeCreate: function (record, options) {
    return self.models[modelName].incrementAfter(name, record[name]);
  },
  afterDestroy: function (record, options) {
    return self.models[modelName].decrementAfter(name, record[name]);
  }
}
and then, added to my extended model prototype, I have:
incrementAfter: function (field, position) {
  // Shift everything at or after `position` up by one to make room
  return this.sequelize.query("UPDATE " + this.tableName + " SET " + field + " = " + field + " + 1 WHERE " + field + " >= " + position);
},
decrementAfter: function (field, position) {
  // Close the gap left by a deleted item
  return this.sequelize.query("UPDATE " + this.tableName + " SET " + field + " = " + field + " - 1 WHERE " + field + " >= " + position);
},
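If you'd rather not splice the position value into the SQL string, Sequelize's query replacements can bind it (a sketch; the table and field names still come from the model definition and must be trusted):
incrementAfter: function (field, position) {
  // :position is bound by Sequelize rather than concatenated into the SQL
  return this.sequelize.query(
    "UPDATE " + this.tableName + " SET " + field + " = " + field + " + 1 WHERE " + field + " >= :position",
    { replacements: { position: position } }
  );
},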