I am implementing Sequelize in my Node.js application. Before this, I was using a hand-written INSERT query with ON CONFLICT (field) DO NOTHING to avoid inserting records where a value needed to be unique.
const sql = 'INSERT INTO communications (firstname, lastname, age, department, campus, state, message_uuid) VALUES ($1, $2, $3, $4, $5, $6, $7) ON CONFLICT (message_uuid) DO NOTHING';
const values = [val.firstName, val.lastName, val.age, val.department, val.campus, val.state, message_uuid];
Is there support for this in Sequelize, where I can define the same thing within a model? Or is there perhaps a better way to handle it?
Essentially, if a record already exists in the table with message_uuid = 123 and another record with that same value tries to insert, it is ignored and nothing happens.
You can use the public static bulkCreate(records: Array, options: Object): Promise<Array> method with options.ignoreDuplicates:
Ignore duplicate values for primary keys? (not supported by MSSQL or Postgres < 9.5)
Besides that, it's important to add a unique constraint on the message_uuid field in the model, so that the query uses Postgres's ON CONFLICT DO NOTHING clause.
For example, with "sequelize": "^5.21.3" and postgres:9.6:
import { sequelize } from '../../db';
import { Model, DataTypes } from 'sequelize';
class Communication extends Model {}
Communication.init(
  {
    firstname: DataTypes.STRING,
    lastname: DataTypes.STRING,
    age: DataTypes.INTEGER,
    message_uuid: {
      type: DataTypes.INTEGER,
      unique: true,
    },
  },
  { sequelize, tableName: 'communications' },
);

(async function test() {
  try {
    await sequelize.sync({ force: true });
    // seed
    await Communication.create({ firstname: 'teresa', lastname: 'teng', age: 32, message_uuid: 123 });
    // test
    await Communication.bulkCreate([{ firstname: 'teresa', lastname: 'teng', age: 32, message_uuid: 123 }], {
      ignoreDuplicates: true,
    });
  } catch (error) {
    console.log(error);
  } finally {
    await sequelize.close();
  }
})();
Execution result:
Executing (default): DROP TABLE IF EXISTS "communications" CASCADE;
Executing (default): CREATE TABLE IF NOT EXISTS "communications" ("id" SERIAL , "firstname" VARCHAR(255), "lastname" VARCHAR(255), "age" INTEGER, "message_uuid" INTEGER UNIQUE, PRIMARY KEY ("id"));
Executing (default): SELECT i.relname AS name, ix.indisprimary AS primary, ix.indisunique AS unique, ix.indkey AS indkey, array_agg(a.attnum) as column_indexes, array_agg(a.attname) AS column_names, pg_get_indexdef(ix.indexrelid) AS definition FROM pg_class t, pg_class i, pg_index ix, pg_attribute a WHERE t.oid = ix.indrelid AND i.oid = ix.indexrelid AND a.attrelid = t.oid AND t.relkind = 'r' and t.relname = 'communications' GROUP BY i.relname, ix.indexrelid, ix.indisprimary, ix.indisunique, ix.indkey ORDER BY i.relname;
Executing (default): INSERT INTO "communications" ("id","firstname","lastname","age","message_uuid") VALUES (DEFAULT,$1,$2,$3,$4) RETURNING *;
Executing (default): INSERT INTO "communications" ("id","firstname","lastname","age","message_uuid") VALUES (DEFAULT,'teresa','teng',32,123) ON CONFLICT DO NOTHING RETURNING *;
Checking the database, there is only one row, as expected.
node-sequelize-examples=# select * from communications;
id | firstname | lastname | age | message_uuid
----+-----------+----------+-----+--------------
1 | teresa | teng | 32 | 123
(1 row)
Also see the new upsert feature in Sequelize v6:
https://sequelize.org/api/v6/class/src/model.js~model#static-method-upsert
Implementation details:
MySQL - implemented with ON DUPLICATE KEY UPDATE
PostgreSQL - implemented with ON CONFLICT DO UPDATE. If the update data contains the PK field, then the PK is selected as the default conflict key. Otherwise, the first unique constraint/index that can satisfy the conflict key requirements is selected.
SQLite - implemented with ON CONFLICT DO UPDATE
MSSQL - implemented as a single query using MERGE and WHEN (NOT) MATCHED THEN
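Building on those implementation notes, a minimal sketch of a v6 upsert call might look like this, reusing the Communication model from the earlier example (on Postgres this issues INSERT ... ON CONFLICT ... DO UPDATE under the hood; whether the second tuple element is a boolean or null depends on the dialect):

```javascript
// Sketch: upsert on the unique message_uuid column. If a row with
// message_uuid = 123 exists, its other fields are updated; otherwise
// a new row is inserted.
const [record, created] = await Communication.upsert({
  firstname: 'teresa',
  lastname: 'teng',
  age: 33,           // updated field on conflict
  message_uuid: 123, // conflict key (unique column)
});
```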
As the second argument of Model.create you can provide an onConflict option; please read the documentation.
Related
I have a PostgreSQL table which has a JSONB field. The table can be created by
create table mytable
(
    id uuid primary key default gen_random_uuid(),
    data jsonb not null
);
insert into mytable (data)
values ('{
  "user_roles": {
    "0x101": [
      "admin"
    ],
    "0x102": [
      "employee",
      "customer"
    ]
  }
}'::json);
In the above example, I am using "0x101" and "0x102" to represent two UIDs. In reality, there are more UIDs.
I am using jackc/pgx to read that JSONB field.
Here is my code
import (
    "context"
    "fmt"

    "github.com/jackc/pgx/v4/pgxpool"
)

type Data struct {
    UserRoles struct {
        UID []string `json:"uid,omitempty"`
        // ^ The above does not work because there is no fixed field called "uid".
        // Instead the keys are "0x101", "0x102", ...
    } `json:"user_roles,omitempty"`
}

type MyTable struct {
    ID   string
    Data Data
}

pg, err := pgxpool.Connect(context.Background(), databaseURL)
sql := "SELECT data FROM mytable"
myTable := new(MyTable)
err = pg.QueryRow(context.Background(), sql).Scan(&myTable.Data)
fmt.Printf("%v", myTable.Data)
As the comment inside mentions, the above code does not work.
How do I represent dynamic keys in a struct type, or how can I return all of the JSONB field's data? Thanks!
Edit your Data struct as follows:
type Data struct {
    UserRoles map[string][]string `json:"user_roles,omitempty"`
}
You can also use a UUID type as the map's key type if you are using a package like https://github.com/google/uuid for UUIDs.
However, please note that this way, if the user_roles JSON object has more than one entry for a particular user (with the same UUID), only one of them will be fetched.
I'm struggling with something that is maybe pretty simple.
I'm using PostgreSQL with Sequelize and TypeScript.
What I'm trying to do is create two records, with one referencing the other, but if the creation of either one fails I don't want to commit anything.
This is my code, where I'm trying to create someone and assign him some shoes.
CREATE TABLE User
(
    id BIGSERIAL PRIMARY KEY,
    firstname TEXT,
    lastName TEXT
);

CREATE TABLE Shoes
(
    id BIGSERIAL PRIMARY KEY,
    size INTEGER NOT NULL,
    idUser BIGINT REFERENCES User(id) NOT NULL
);
async function operations() {
  const t = await sequelize.transaction();
  try {
    await User.create({
      firstName: 'Bart',
      lastName: 'Simpson'
    }, { transaction: t });
    await Shoes.create({
      idUser: // here I want the id of my future new creation (Bart Simpson)
      size: 43
    }, { transaction: t });
    await t.commit();
  } catch (error) {
    await t.rollback();
  }
}

operations().then(() => { /* do something */ });
The thing is, I don't know how to get the future id of my new user. If I hard-code something like 1 (when the database is empty), or take the latest user id and add 1, then I get a "violates foreign key constraint" error.
I think it's because the user doesn't exist in the database yet, but it does exist in the transaction.
If someone could help me :)
In fact, a get/find query that is passed the transaction can also see the value that will be created in that transaction, so you just need to run the get with the exact same transaction object passed into the method.
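A minimal sketch of that idea, assuming the User and Shoes models from the question: create resolves with the new instance before the transaction is committed, so its id can be used by the second create inside the same transaction:

```javascript
async function operations() {
  const t = await sequelize.transaction();
  try {
    // create returns the pending instance; its id is already assigned
    const user = await User.create({
      firstName: 'Bart',
      lastName: 'Simpson'
    }, { transaction: t });

    await Shoes.create({
      idUser: user.id, // reference the not-yet-committed user
      size: 43
    }, { transaction: t });

    await t.commit();
  } catch (error) {
    await t.rollback();
  }
}
```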
I am trying to create an index with multiple fields, where one of the fields is a foreign key to another table. However, I get the following error:
Error: Index "player_id_UNIQUE" contains column that is missing in the
entity (Earning): player_id
Given that player_id is a foreign key that I'm joining on, how do I handle this?
import { Column, Entity, Index, JoinColumn, ManyToOne, PrimaryColumn } from "typeorm";
import { PersonPlayer } from "./PersonPlayer";
import { Team } from "./Team";
@Entity()
@Index("player_id_UNIQUE", ["player_id", "period", "year"], { unique: true })
export class Earning {
  @PrimaryColumn({ length: 36 })
  id: string;

  @Column({ nullable: true })
  year: number;

  @Column({ type: 'decimal', nullable: true })
  amount: number;

  @Column({ nullable: true, length: 45 })
  period: string;

  @ManyToOne(() => Team, { nullable: true })
  @JoinColumn({ name: 'team_id' })
  team: Team;

  @ManyToOne(() => PersonPlayer, { nullable: true })
  @JoinColumn({ name: 'player_id' })
  player: PersonPlayer;

  @Column({ nullable: true, length: 45 })
  dtype: string;
}
When I generate this entity and create the SQL table (without the index), I see player_id as one of the columns. But it appears that TypeORM is not able to recognize, within the index definition, that player_id exists in the entity through the JoinColumn relationship.
This is completely undocumented so it took some playing around until I stumbled upon the correct syntax. You can actually use sub-properties in index definitions:
@Index("player_id_UNIQUE", ["player.id", "period", "year"], { unique: true })
That way, player.id is automatically mapped to player_id in the resulting SQL:
CREATE UNIQUE INDEX "player_id_UNIQUE" ON "user_earning" ("player_id", "period", "year")
You can explicitly declare player_id in the Earning entity. The only change you need to make is to add
@Column()
player_id: number;
before the player definition.
This way, TypeORM recognizes player_id as a valid column which you can use in an @Index or @Unique definition.
It is documented behaviour: https://typeorm.io/#/relations-faq/how-to-use-relation-id-without-joining-relation
@Index(["player.id", "period", "year"])
Or just do this! 🥳
I want to create a migration with Sequelize to rename camelCase columns, so that the database ends up with columns in snake_case.
I use Sequelize to create and run the migration.
module.exports = {
  up: function(queryInterface, Sequelize) {
    return queryInterface.renameColumn('my_some_table', 'totoId', 'toto_id');
  },
  down: function(queryInterface, Sequelize) {
    //
  }
};
But... I have a unique constraint on this column (totoId) together with the name column, named my_some_table_name_totoId_uindex, and I also have an index on this column (totoId).
How can I force the renaming of a column that has a unique constraint and an index?
You have to drop all the constraints, rename the column and then add the constraints back. With a single constraint on totoId it would look something like this:
// 1) drop constraint
queryInterface.removeConstraint('my_some_table', 'my_constraint');

// 2) rename column
queryInterface.renameColumn('my_some_table', 'totoId', 'toto_id');

// 3) add constraint back
queryInterface.addConstraint('my_some_table', ['toto_id'], {
  type: 'unique',
  name: 'my_constraint'
});
Remember that migrations should be atomic operations, so you should create 3 migrations in that order. Or, even better, as @Santilli pointed out in the comments, you could create a transaction.
This will prevent any change from being applied if one of the queries fails:
return queryInterface.sequelize.transaction(async (transaction) => {
  await queryInterface.removeConstraint("my_some_table", "my_constraint", {
    transaction,
  });
  await queryInterface.renameColumn("my_some_table", "totoId", "toto_id", {
    transaction,
  });
  await queryInterface.addConstraint("my_some_table", ["toto_id"], {
    type: "unique",
    name: "my_constraint",
    transaction,
  });
});
Also, remember to create a transaction to revert the changes in the down function.
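A sketch of a matching down function, assuming the same constraint name as above, simply mirrors the up migration in reverse:

```javascript
down: function (queryInterface, Sequelize) {
  return queryInterface.sequelize.transaction(async (transaction) => {
    // undo in reverse order: drop the new constraint, rename back, re-add
    await queryInterface.removeConstraint("my_some_table", "my_constraint", {
      transaction,
    });
    await queryInterface.renameColumn("my_some_table", "toto_id", "totoId", {
      transaction,
    });
    await queryInterface.addConstraint("my_some_table", ["totoId"], {
      type: "unique",
      name: "my_constraint",
      transaction,
    });
  });
}
```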
As per this link:
Supported Operations on DynamoDB
"You can query only tables that have a composite primary key (partition key and sort key)."
This doesn't seem correct, though. I have a table in DynamoDB called 'users' which has a primary key consisting of only one attribute, 'username'.
And I'm able to query this table just fine in Node.js using only a KeyConditionExpression on the attribute 'username'. Please see below:
var getUserByUsername = function (username, callback) {
  var dynamodbDoc = new AWS.DynamoDB.DocumentClient();
  var params = {
    TableName: "users",
    KeyConditionExpression: "username = :username",
    ExpressionAttributeValues: {
      ":username": username
    }
  };
  dynamodbDoc.query(params, function (err, data) {
    if (err) {
      console.error("Unable to query. Error:", JSON.stringify(err, null, 2));
      callback(err, null);
    } else {
      console.log("DynamoDB Query succeeded.");
      callback(null, data);
    }
  });
};
This code works just fine. So I'm wondering: is the documentation incorrect, or am I missing something?
The documentation is correct.
"Partition Key and Sort Key – A composite primary key, composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key. DynamoDB uses the partition key value as input to an internal hash function"
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DataModel.html
If a table doesn't have a sort key (range attribute), then the primary key is built from the hash (partition) key only. One consequence of that is that items won't be sorted the way you might like, since items are sorted by the sort key within a partition.
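For a table whose key is just the partition key, a single-item lookup can also be written with get instead of query. This is a sketch using the same DocumentClient as the question; the table name and key attribute are taken from the question's example, and "alice" is a hypothetical username:

```javascript
var AWS = require("aws-sdk");
var dynamodbDoc = new AWS.DynamoDB.DocumentClient();

// With a simple (partition-key-only) primary key, the full key is just
// the one attribute, so GetItem can fetch the item directly.
var params = {
  TableName: "users",
  Key: { username: "alice" }
};

dynamodbDoc.get(params, function (err, data) {
  if (err) {
    console.error("Unable to get item:", JSON.stringify(err, null, 2));
  } else {
    console.log("GetItem succeeded:", JSON.stringify(data.Item));
  }
});
```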