How to create a unique index containing multiple fields where one is a foreign key - sql

I am trying to create an index with multiple fields, where one of the fields is a foreign key to another table. However, I get the following error:
Error: Index "player_id_UNIQUE" contains column that is missing in the entity (Earning): player_id
Given that player_id is a foreign key that I'm joining on, how do I handle this?
import { Column, Entity, Index, JoinColumn, ManyToOne, PrimaryColumn } from "typeorm";
import { PersonPlayer } from "./PersonPlayer";
import { Team } from "./Team";
@Entity()
@Index("player_id_UNIQUE", ["player_id", "period", "year"], { unique: true })
export class Earning {
  @PrimaryColumn({ length: 36 })
  id: string;

  @Column({ nullable: true })
  year: number;

  @Column({ type: 'decimal', nullable: true })
  amount: number;

  @Column({ nullable: true, length: 45 })
  period: string;

  @ManyToOne(() => Team, { nullable: true })
  @JoinColumn({ name: 'team_id' })
  team: Team;

  @ManyToOne(() => PersonPlayer, { nullable: true })
  @JoinColumn({ name: 'player_id' })
  player: PersonPlayer;

  @Column({ nullable: true, length: 45 })
  dtype: string;
}
When I generate this entity and create the SQL table (without the index), I see player_id as one of the columns. But it appears that TypeORM does not recognize, when resolving the index, that player_id exists in the entity through the @JoinColumn relationship.

This is completely undocumented, so it took some playing around until I stumbled upon the correct syntax. You can actually use sub-properties in index definitions:
@Index("player_id_UNIQUE", ["player.id", "period", "year"], { unique: true })
That way, player.id is automatically mapped to player_id in the resulting SQL:
CREATE UNIQUE INDEX "player_id_UNIQUE" ON "user_earning" ("player_id", "period", "year")

You can explicitly declare player_id in the Earning entity. The only change you need to make is to add
@Column()
player_id: number
before the player definition.
This way TypeORM recognizes player_id as a valid column, which you can use in an @Index or @Unique definition.
It is documented behaviour: https://typeorm.io/#/relations-faq/how-to-use-relation-id-without-joining-relation
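As a minimal sketch (assuming PersonPlayer uses a 36-character string id like Earning's own id; adjust the column type if it is actually numeric), the relevant part of the entity would then look like this:
@Entity()
@Index("player_id_UNIQUE", ["player_id", "period", "year"], { unique: true })
export class Earning {
  @PrimaryColumn({ length: 36 })
  id: string;

  // Explicit column backing the relation, so the index can reference it directly
  @Column({ nullable: true, length: 36 })
  player_id: string;

  @ManyToOne(() => PersonPlayer, { nullable: true })
  @JoinColumn({ name: 'player_id' })
  player: PersonPlayer;

  // ...remaining columns (year, amount, period, dtype) unchanged
}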

#Index(["player.id", "period", "year"])
Or Just do this ! 🥳

Related

Prisma nested recursive relations depth

I must query a group and all of its subgroups from the same model.
However, when fetching from the Group table as shown below, Prisma doesn't include more than one level of depth in the resulting Subgroups relation (subgroups of subgroups are left out). The Subgroups attribute holds an array whose elements are of the same type as the model itself (recursive).
model Group {
  id        Int     @id @default(autoincrement())
  parentId  Int?
  Parent    Group?  @relation("parentId", fields: [parentId], references: [id])
  Subgroups Group[] @relation("parentId")
}
GroupModel.findFirst({
  where: { id: _id },
  include: { Subgroups: true }
});
I guess this might be some sort of safeguard to avoid infinite recursive models when generating results. Is there any way of dodging this limitation (if it's one), and if so, how?
Thanks
You can query more than 1-depth nested subgroups by nesting include like so:
GroupModel.findFirst({
  where: { id: _id },
  include: { Subgroups: { include: { Subgroups: { include: { Subgroups: true } } } } } // and so on...
});
But, as mentioned by @TasinIshmam, something like includeRecursive is not supported by Prisma at the moment.
The workaround would be to use $queryRaw (https://www.prisma.io/docs/concepts/components/prisma-client/raw-database-access#queryraw) together with SQL recursive queries (https://www.postgresql.org/docs/current/queries-with.html#QUERIES-WITH-RECURSIVE)
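As a rough sketch of that workaround (assuming PostgreSQL, a table named "Group" with the columns from the schema above, and prisma as the client instance), a recursive CTE via $queryRaw could look like:
const subgroups = await prisma.$queryRaw`
  WITH RECURSIVE tree AS (
    -- start with the direct children of the requested group
    SELECT * FROM "Group" WHERE "parentId" = ${_id}
    UNION ALL
    -- then repeatedly pull in children of what we already found
    SELECT g.* FROM "Group" g JOIN tree t ON g."parentId" = t.id
  )
  SELECT * FROM tree;
`;
This returns all descendant groups as a flat list; nesting them back into a tree would have to happen in application code.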

TypeORM cannot delete row with ManyToOne / OneToMany relation

I have a problem right now that I honestly don't know how to fix. I have spent hours on this already and cannot find the solution. I am using MS SQL on Azure.
The way I have set up my entities is the following:
Customer and Visits: OneToMany (Primary)
Visits and Customers: ManyToOne (Inverse)
I am soft-deleting my customers, so that the information for the visits can be retrieved regardless of whether or not the user wants to see the customer data specifically. The data is still getting resolved correctly using the relationship. That's also why I don't want to use "Cascade DELETE" here.
However, since I want to delete visits completely (not soft-delete them like the customers), I am facing issues, probably related to foreign key constraints (I am not sure, because I don't get any error output from TypeORM). The DeleteResult.affected property, however, returns 0, which matches what I see in my DataGrip queries when I check the actual table data.
What's also important is that I am able to manually delete the row using a simple SQL statement like the following:
DELETE FROM visits
WHERE uuid = 'f0ea300d-...-656a'
My entities are set up like this (unimportant information left out):
@Entity({ name: 'customers' })
export class Customer {
  @PrimaryColumn()
  uuid: string

  @OneToMany(() => Visit, (visit) => visit.customer)
  visits?: Visit[]
}

@Entity({ name: 'visits' })
export class Visit {
  @PrimaryColumn()
  uuid: string

  @ManyToOne(() => Customer, (customer) => customer.visits)
  customer: Customer
}
My GraphQL resolver:
@Mutation(() => Boolean)
async deleteVisitsByUuid(
  @Arg('uuid') uuid: string,
  @Ctx() { conn }: AppContext,
): Promise<boolean> {
  const repo = conn.getRepository(Customer)
  const result = await repo.delete(uuid)
  const affected = result.affected
  if (affected === undefined || affected == null) {
    return false
  } else {
    return affected > 0
  }
}
The problem was conn.getRepository(Customer). I have replaced it with conn.getRepository(Visit).
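For reference, a sketch of the corrected resolver (same AppContext and entities as above):
@Mutation(() => Boolean)
async deleteVisitsByUuid(
  @Arg('uuid') uuid: string,
  @Ctx() { conn }: AppContext,
): Promise<boolean> {
  // Delete via the Visit repository, since it is a visits row being removed
  const repo = conn.getRepository(Visit)
  const result = await repo.delete(uuid)
  return (result.affected ?? 0) > 0
}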

Many to many with pivot data to dgraph using graphql schema

I have the below many-to-many relation in a relational DB and I want to transition it to Dgraph.
This relation also has extra columns in the pivot table products_stores, such as price and disc_price.
I have the below Dgraph schema using GraphQL:
type Product {
  id: ID!
  name: String! @id
  slug: String! @id
  image: String
  created_at: DateTime!
  updated_at: DateTime!
  stores: [Store] @hasInverse(field: products)
}

type Store {
  id: ID!
  name: String! @id
  logo: String
  products: [Product] @hasInverse(field: stores)
  created_at: DateTime!
  updated_at: DateTime!
}
I am a newbie to graph databases and I don't know how to define these extra pivot columns.
Any help would be greatly appreciated.
If your pivot table is only a linking table holding no additional information, then you model it as you did above. However, if your pivot table contains additional information about the relationship, then you will need to model it with an intermediate linking type. It is almost the same idea as above. I prefer these linking types to have a name describing the link; in this case I named it Stock, but that name could be anything you want it to be. I also prefer camelCase for field names, so my example reflects this preference as well. (I added some search directives too.)
type Product {
  id: ID!
  name: String! @id
  slug: String! @id
  image: String
  createdAt: DateTime! @search
  updatedAt: DateTime! @search
  stock: [Stock] @hasInverse(field: product)
}

type Store {
  id: ID!
  name: String! @id
  logo: String
  stock: [Stock] @hasInverse(field: store)
  createdAt: DateTime! @search
  updatedAt: DateTime! @search
}

type Stock {
  id: ID!
  store: Store!
  product: Product!
  name: String! @id
  price: Float! @search
  originLink: String
  discPrice: Float @search
}
The @hasInverse directive is only required on one edge of the inverse relationship; if you want to, for readability, you can define it on both ends without any side effects.
This model allows you to query many common use cases very simply, without the additional join statements you are probably used to in SQL. And the best part about Dgraph is that all of these queries and mutations are generated for you, so you don't have to write any resolvers! Here is one example, finding all the items in a store within a certain price range:
query ($storeName: String, $minPrice: Float!, $maxPrice: Float!) {
  getStore(name: $storeName) {
    id
    name
    stock(filter: { price: { between: { min: $minPrice, max: $maxPrice } } }) {
      id
      name
      price
      product {
        id
        name
        slug
        image
      }
    }
  }
}
For a query that finds only specific products in a specific store, use the @cascade directive to remove the undesired Stock nodes (until Dgraph finishes the nested filters RFC that is in progress):
query ($storeName: String, $productIDs: [ID!]!) {
  getStore(name: $storeName) {
    id
    name
    stock @cascade(fields: ["product"]) {
      id
      name
      price
      product(filter: { id: $productIDs }) @cascade(fields: ["id"]) {
        id
        name
        slug
        image
      }
    }
  }
}

Can I update a FaunaDB document without knowing its ID?

FaunaDB's documentation covers how to update a document, but their example assumes that I'll have the id to pass into Ref:
Ref(schema_ref, id)
client.query(
  q.Update(
    q.Ref(q.Collection('posts'), '192903209792046592'),
    { data: { text: "Example" } },
  )
)
However, I'm wondering if it's possible to update a document without knowing its id. For instance, if I have a collection of users, can I find a user by their email, and then update their record? I've tried this, but Fauna returns a 400 (Database Ref expected, String provided):
client
  .query(
    q.Update(
      q.Match(
        q.Index("users_by_email", "me@example.com")
      ),
      { name: "Em" }
    )
  )
Although Ben's comments are correct (that's the way you do it), I wanted to note that the error you are receiving is because you are missing a bracket here: "users_by_email"), "me@example.com"
The error is logical if you know that Index takes an optional database reference as second argument.
To clarify what Ben said:
If you do this you'll get another error:
Update(
  Match(
    Index("accounts_by_email"), "test@test.com"
  ),
  { data: { email: "test2@test.com" } }
)
Since Match could potentially return more than one element, it returns a set of references called a SetRef. Think of SetRefs as lists that are not materialized yet. If you are certain there is only one match for that e-mail (e.g. if you set a uniqueness constraint), you can materialize it using Paginate or Get:
Get:
Update(
  Select(['ref'], Get(Match(
    Index("accounts_by_email"), "test@test.com"
  ))),
  { data: { email: 'test2@test.com' } }
)
Get returns the complete document, so we need to specify that we only require the ref, with Select(['ref'], ...).
Paginate:
Update(
  Select(['data', 0],
    Paginate(Match(
      Index("accounts_by_email"), "test@test.com"
    ))
  ),
  { data: { email: "testchanged@test.com" } }
)
You are very close! Update does require a ref. You can get one via your index though. Assuming your index has a default values setting (i.e. paging a match returns a page of refs) and you are confident that there is a single match, or that the first match is the one you want, you can do Select(["ref"], Get(Match(Index("users_by_email"), "me@example.com"))) to transform your set ref into a document ref. This can then be passed into Update (or to any other function that wants a document ref, like Delete).
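Putting that together with the JavaScript driver from the question, a minimal sketch (assuming users_by_email exists and the email is unique) would be:
client.query(
  q.Update(
    // Turn the SetRef returned by Match into a single document ref
    q.Select(['ref'], q.Get(
      q.Match(q.Index('users_by_email'), 'me@example.com')
    )),
    { data: { name: 'Em' } }
  )
)
Note that the fields to change go under data, as in the documentation example.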

Querying (and filtering) in a many-to-many relationship in Backand

I'm trying to use the pet-owner example to create some sort of playlist app where a playlist can be shared among different users.
I have read both links to understand how many-to-many relationship is created in Backand:
Link 1 -
Link 2
According to the pet example, to get all owners of one pet I should get the pet object (using its id field) and then filter its user_pets list, matching the user id. That may work for a small number of users/pets, but I'd rather query the user_pets table directly, filtering by user_id and pet_id.
My approach has been this code, without success:
$http({
  method: 'GET',
  url: getUrl(), // this maps to pets_owner "table"
  params: {
    deep: true,
    exclude: 'metadata',
    filter: [
      { fieldName: 'pet', operator: 'equals', value: pet_id },
      { fieldName: 'owner', operator: 'equals', value: user_id }
    ]
  }
})
Any idea how to query/filter to get only related results?
Thanks in advance
Because user_id and pet_id are both object fields, the operator should be "in".
From the Backand docs:
The following are the possible operators, depending on the field type:
numeric or date fields:
-- equals
...
object fields:
-- in
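Applied to the request from the question, a sketch of the filter using the "in" operator (same getUrl() helper and ids as above) would be:
$http({
  method: 'GET',
  url: getUrl(), // maps to the pets_owner "table"
  params: {
    deep: true,
    exclude: 'metadata',
    filter: [
      // object (relation) fields use the "in" operator
      { fieldName: 'pet', operator: 'in', value: pet_id },
      { fieldName: 'owner', operator: 'in', value: user_id }
    ]
  }
})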