Cannot return null for non-nullable type: 'String' within Parent (error after adding a new field to the schema) - aws-amplify-cli

When I use a Lambda function to update data through a GraphQL mutation, the data gets updated, but the subscription payload shows up as NULL. This issue happens only when I change the schema by adding a new field/column; it does not happen when I don't make any changes to the existing schema.
Subscription error after adding a new field to an existing schema
Note: We are using Amplify.
When I add a new field/column to the schema and use a Lambda function to update data through a GraphQL mutation, the data gets updated, and the subscription should return the updated data.
If I add a new field to the schema (in my case, an address field) and update it from a backend Lambda function using a mutation, the new field gets updated. But when I fetch the updated data through a subscription or query from any frontend/Amplify client, it comes back as NULL. I think there is a problem with the resolver, which is sending the data as NULL.

Same issue here: after digging a bit, it turns out you need to select, in the mutation that triggers the subscription, all the fields required by the subscription (in your case, the mutation executed from the Lambda function).
Ex: assuming you have a subscription with the following form:
subscription MySubscription {
  onSomethingDone {
    field1
    ..
    fieldN
  }
}
Then you need to make sure that the mutation in your Lambda function selects all fields from field1 to fieldN, i.e. with the form:
mutation MyMutation {
  createSomething {
    field1
    ..
    fieldN
  }
}
Hope this helps
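As a rough Node.js sketch of such a Lambda-side mutation call — the endpoint, API key, input shape, and field names below are assumptions for illustration, and the request uses the global fetch available in Node 18+ Lambda runtimes rather than any particular AppSync client:
const GRAPHQL_ENDPOINT = process.env.GRAPHQL_ENDPOINT; // hypothetical AppSync endpoint
const API_KEY = process.env.API_KEY;                    // hypothetical API key

// The selection set lists every field the subscription asks for,
// so the subscription payload comes back fully populated instead of NULL.
const query = `
  mutation MyMutation($input: CreateSomethingInput!) {
    createSomething(input: $input) {
      field1
      fieldN
      address
    }
  }
`;

exports.handler = async (event) => {
  const res = await fetch(GRAPHQL_ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'x-api-key': API_KEY },
    body: JSON.stringify({ query, variables: { input: event.input } }),
  });
  return res.json();
};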

Related

Use Postgres generated columns in Sequelize model

I have a table where it's beneficial to generate a pre-calculated value in the database engine rather than in my application code. For this, I'm using Postgres' generated column feature. The SQL is like this:
ALTER TABLE "Items"
  ADD "generatedValue" DOUBLE PRECISION GENERATED ALWAYS AS (
    LEAST("someCol", "someOtherCol")
  ) STORED;
This works well, but I'm using Sequelize with this database. I want to find a way to define this column in my model definition, so that Sequelize will query it, not attempt to update a row's value for that column, and ideally will create the column on sync.
class Item extends Sequelize.Model {
  static init(sequelize) {
    return super.init({
      someCol: Sequelize.DOUBLE,
      someOtherCol: Sequelize.DOUBLE,
      generatedValue: // <<<-- What goes here??
    }, { sequelize });
  }
}
How can I do this with Sequelize?
I can specify the column as a DOUBLE, and Sequelize will read it, but the column won't be created correctly on sync. Perhaps there's some post-sync hook I can use? I was considering afterSync to drop the column and re-add it with my generated-value statement, but I would first need to detect that the column hadn't already been converted, or I would lose my data. (I run sync, without force: true, on every app startup.)
Any thoughts, or alternative ideas would be appreciated.
Until Sequelize supports readOnly fields and the GENERATED datatype, you can work around this with a custom datatype:
const Item = sequelize.define('Item', {
  someCol: { type: DataTypes.DOUBLE },
  someOtherCol: { type: DataTypes.DOUBLE },
  generatedValue: {
    type: 'DOUBLE PRECISION GENERATED ALWAYS AS (LEAST("someCol", "someOtherCol")) STORED',
    set() {
      throw new Error('generatedValue is read-only');
    },
  },
});
This will generate the column correctly in Postgres when using sync(), and prevent setting generatedValue from JavaScript by throwing an Error.
Assuming that Sequelize never tries to update a field that hasn't changed, as described in https://sequelize.org/master/manual/model-instances.html#change-awareness-of-save, this should work.
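As a quick usage sketch under those assumptions (illustrative values, run inside an async function; this also assumes Sequelize omits the unset generatedValue from the INSERT):
const item = await Item.create({ someCol: 2.5, someOtherCol: 1.5 });
await item.reload();              // pull the database-computed value
console.log(item.generatedValue); // 1.5, i.e. LEAST(someCol, someOtherCol)
item.generatedValue = 99;         // throws Error: generatedValue is read-only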

Phalcon working with MySQL routines

I have a MySQL database which has GUIDs stored as binary(16) for the primary keys. I'm using MySQL user-defined routines (GUIDToBinary() and BinaryToGUID()) to convert the IDs to and from GUIDs when inserting and selecting.
In order to use my routines in Phalcon, I am setting the 'columns' parameter of the find() and findFirst() model functions, which means I'm now working with incomplete objects, as the functions return instances of Phalcon\Mvc\Model\Row.
The docs state that when using the columns parameter the following occurs:
Return specific columns instead of the full columns in the model. When
using this option an incomplete object is returned
UserController.php
// Returns Phalcon\Mvc\Model\Row
$incompleteUser = User::find(['columns' => 'BinaryToGUID(id) as id, status, username, password, .....']);
// Create a new user object to update
$user = new User();
// Populate with existing data
$user->assign($incompleteUser->toArray());
// Assign new changes requested by the user
$user->assign($this->request->getPost());
// Update
$user->updateUser();
User.php
public function updateUser()
{
    $manager = $this->getModelsManager();
    return $manager->executeQuery("UPDATE User SET ..... WHERE id = GUIDToBinary(" . $this->getDI()->get('db')->escapeString($this->id) . ")");
}
Irrespective of the fact that I've explicitly written an update, an insert is performed because the model is in a transient state.
One solution I thought of was to move the binary-to-GUID conversion into Phalcon using model events, but I can't find a suitable event for performing the conversion when selecting. Updating/inserting is possible via the beforeSave() and beforeUpdate() events. Perhaps I could just have separate getId() and getIdGuid() properties on the model, but I would prefer to avoid this if possible.
Is there a way to use MySQL user-defined routines in Phalcon and hydrate my model so that it remains in a persistent state? Or do I need to go down the raw SQL route for my updates and avoid PHQL?
Thanks in advance for your time.

How to replace relationship field type object IDs with names / titles in KeystoneJS list CSV download / export?

In the Keystone admin list view, the handy download link exports all list items in a CSV file; however, if some of the fields are of the Relationship type, the exported CSV contains Mongo ObjectIDs instead of meaningful strings (name, title, etc.) which would be useful.
How can one force the ObjectIDs to be mapped / replaced by another field?
Keystone has an undocumented feature that allows you to create your own custom CSV export function. This feature was added back in April (see KeystoneJS Issue #278).
All you need to do is add a method to the schema called toCSV. Keystone will inject any of the following dependencies when specified as arguments to this method.
- req (current express request object)
- user (currently authenticated user)
- row (default row data, as generated without custom toCSV())
- callback (invokes async mode, must be provided last)
You could, for example, use the Mongoose populate method to replace the ObjectIDs of any relationship field with whatever data you want.
Assume you have a Post list with an author field of type Types.Relationship pointing to another list (let's say User) which has a name field. You could replace the author ObjectID with the author's name (from the User list) by doing the following.
Post.schema.methods.toCSV = function(callback) {
  var post = this,
      rtn = this.toJSON();
  this.populate('author', function() {
    rtn.author = post.author.name; // <-- author now has data from User list
    callback(null, rtn);
  });
};
.toCSV() will be called for every document returned, with the Model as the context. When used asynchronously (as above), you should return a JSON representation of the new CSV data by passing it as the second argument of the callback. When using it synchronously, simply return the updated JSON object.
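For completeness, a synchronous sketch using the injected user argument — the isAdmin check is a hypothetical rule for illustration, not part of the original question:
Post.schema.methods.toCSV = function(user) {
  var rtn = this.toJSON();
  // Hypothetical rule: hide the author column from non-admin users.
  if (!user || !user.isAdmin) delete rtn.author;
  return rtn; // synchronous mode: no callback argument, so just return the row
};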

Titanium Android data reflection in table view

On Google APIs Android 2.3.1, data changes in the database tables are not reflected in TableView screens; but if we restart the app the changes show up, and reloading app.js also shows the recent changes.
Can anyone help me fix this?
fireEvent("db_update")
&
addEventListener('db_update', function() { /* what should be here to update table data? */ })
To update a database, you need to remove it first and then install a new one, or run queries to update the current one.
This is how I update my database, based on a version number I store in the properties:
function updateDatabase(version) {
  if (version != Ti.App.Properties.getInt('version', 0)) {
    sb.db.remove();
    sb.db = Ti.Database.install('/lib/db.sqlite', 'db');
    Ti.App.Properties.setInt('version', version);
  }
}
In the app, on start, I run the function with a manually typed version number. Every time your database changes, you can manually bump this number.
updateDatabase(5);
This of course does not cover a regular update to the content of a table (when you run a regular query on the database).
In that case, you should rebuild the content of the TableView yourself (you have the code that fills the TableView; rerun it) or find a way to target the exact row you need to change, with an ID for example.
I found the solution:
Ti.App.addEventListener('updatedb', updateData);
Here 'updateData' is the function which gets the data for the TableView from the database. At the bottom of the 'updateData' function, set the data for the table view, i.e. tableview.setData(data);
Then your insert, update, or delete function should fire the 'updatedb' event, i.e.:
function deleteData(recId) {
  var db = Ti.Database.open(DBNAME);
  db.execute("DELETE FROM tblName WHERE id IN (" + recId + ")");
  db.close();
  Ti.App.fireEvent('updatedb'); // must match the event name registered above
}
Now, after calling the deleteData function, your TableView will show the data without the deleted record.
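For illustration, a minimal sketch of what such an updateData function could look like — DBNAME, tblName, the name column, and the tableview variable are assumed placeholders carried over from the snippets above:
function updateData() {
  var db = Ti.Database.open(DBNAME);
  var rows = db.execute("SELECT id, name FROM tblName");
  var data = [];
  while (rows.isValidRow()) {
    data.push(Ti.UI.createTableViewRow({
      title: rows.fieldByName('name'),
      rowId: rows.fieldByName('id')
    }));
    rows.next();
  }
  rows.close();
  db.close();
  tableview.setData(data); // refresh the TableView with the current rows
}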

WCF Data Service - update a record instead of inserting it

I'm developing a WCF Data Service with self-tracking entities, and I want to prevent clients from inserting duplicate content. Whenever they POST data without providing a value for the data key, I have to execute some logic to determine whether that data is already present in my database. I've written a change interceptor like this:
[ChangeInterceptor("MyEntity")]
public void OnChangeEntity(MyEntity item, UpdateOperations operations)
{
    if (operations == UpdateOperations.Add)
    {
        // Here I search the database to see if a matching record exists.
        // If a record is found, I'd like to use its ID and basically change an insertion
        // into an update.
        item.EntityID = existingEntityID;
        item.MarkAsModified();
    }
}
However, this is not working: the existingEntityID is ignored, and as a result the record is always inserted, never updated. Is this even possible to do? Thanks in advance.
Hooray! I managed to do it.
item.EntityID = existingEntityID;
this.CurrentDataSource.ObjectStateManager.ChangeObjectState(item, EntityState.Modified);
I had to change the object state explicitly, i.e. by calling .ChangeObjectState on the ObjectStateManager, which is a property of the underlying EntityContext. I was misled by the .MarkAsModified() method which, at this point, I'm not sure what it does.