Column order in the created table does not have 'id' first, despite id being the first field in the SQLModel - sqlmodel

I am creating a table using the following code:
from sqlmodel import Field, SQLModel, JSON, Column

class MyTable(SQLModel, table=True):
    id: int | None = Field(default=None, primary_key=True)
    name: str
    type: str
    slug: str = Field(index=True, unique=True)
    resource_data: dict | None = Field(default=None, sa_column=Column(JSON))  # type: ignore

# ... create engine
SQLModel.metadata.create_all(engine)
The CREATE TABLE statement generated for the model above puts the resource_data column before everything else, instead of preserving the declared order with 'id' first:
CREATE TABLE mytable (
    resource_data JSON,  -- <-- why is this the FIRST column created?
    id SERIAL NOT NULL,
    name VARCHAR NOT NULL,
    type VARCHAR NOT NULL,
    slug VARCHAR NOT NULL,
    PRIMARY KEY (id)
)
This looks unusual when I inspect my PostgreSQL tables in a DB tool like pgAdmin.
How do I ensure the table is created with the 'natural' order of the declarative model, that is, with 'id' first?
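One workaround that should preserve the declared order, assuming the reshuffling happens because the explicit Column(JSON) object is instantiated (and therefore ordered) before the implicitly generated columns, is to declare every column explicitly via sa_column so that all Column objects are created in class-body order. A sketch, untested:

from sqlmodel import Field, SQLModel
from sqlalchemy import JSON, Column, Integer, String

class MyTable(SQLModel, table=True):
    # Each field gets an explicit Column, so SQLAlchemy's internal
    # creation-order counter follows the class-body order exactly.
    id: int | None = Field(default=None, sa_column=Column(Integer, primary_key=True))
    name: str = Field(sa_column=Column(String, nullable=False))
    type: str = Field(sa_column=Column(String, nullable=False))
    slug: str = Field(sa_column=Column(String, index=True, unique=True, nullable=False))
    resource_data: dict | None = Field(default=None, sa_column=Column(JSON))

Note that column order is purely cosmetic as far as queries go; it only affects how tools like pgAdmin display the table.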

Related

SQL Table with mixed data type field Best Practice

Hi everyone,
I would like advice on best practice for structuring a relational database with a field that holds mixed data types.
I have 'datasets' (business objects), and I would like a list of parameters associated with each dataset. Those parameters can have different types: strings, integers, floats, and JSON values.
What would be the best structure for the parameters table? Should I have a single column of string type?
CREATE TABLE param_desc (
    id serial PRIMARY KEY,
    name varchar NOT NULL,
    param_type int -- varchar, int, real, json
);

CREATE TABLE param_value (
    id serial PRIMARY KEY,
    dataset_id int NOT NULL,
    param int NOT NULL REFERENCES param_desc (id),
    value varchar NOT NULL,
    CONSTRAINT _param_object_id_param_name_id_time_from_key UNIQUE (dataset_id, param)
);
The problem with this approach is that I can't easily cast value for additional conditions. For example, I want to get all datasets with some specific integer parameter whose int value is more than 10. But if I put the cast in a WHERE clause, it returns an error, because the non-integer parameters can't be cast:
SELECT dataset_id FROM vw_param_current WHERE name = 'priority' AND value::int > 5
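One way to keep a single varchar column and still filter safely is to guard the cast with a CASE expression, so the cast is only attempted on rows of the matching type. A sketch, assuming vw_param_current also exposes param_type and that integers are encoded as param_type = 2:

SELECT dataset_id
  FROM vw_param_current
 WHERE name = 'priority'
   AND CASE WHEN param_type = 2 THEN value::int > 5 ELSE false END;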
Or should I have 4 separate columns, with 3 of them being NULL for every row?
Or should I have 4 different tables?
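For the separate-columns option, a common pattern is one nullable column per type plus a CHECK that exactly one is set, which keeps every cast safe. A sketch (PostgreSQL; num_nonnulls requires 9.6+):

CREATE TABLE param_value (
    id serial PRIMARY KEY,
    dataset_id int NOT NULL,
    param int NOT NULL REFERENCES param_desc (id),
    value_text varchar,
    value_int int,
    value_real real,
    value_json json,
    -- exactly one typed column may be filled in per row
    CHECK (num_nonnulls(value_text, value_int, value_real, value_json) = 1),
    UNIQUE (dataset_id, param)
);

-- The integer comparison no longer needs a cast:
SELECT dataset_id
  FROM param_value v
  JOIN param_desc d ON d.id = v.param
 WHERE d.name = 'priority' AND v.value_int > 5;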

How to update the nested tables in sql using gorm?

Here the code is written in Go. I am using two tables, where one table has a foreign key that refers to the other table's primary key. Let's say I have the following structs defined:
type User struct {
    ID      uint   `gorm:"primary_key;column:id"`
    Name    string `gorm:"column:name"`
    Place   Place
    PlaceID uint
}

type Place struct {
    ID      uint   `gorm:"primary_key;column:id"`
    Name    string `gorm:"column:name"`
    Pincode uint   `gorm:"column:pincode"`
}
And the SQL schema is:
create table place (
    id int(20) NOT NULL AUTO_INCREMENT,
    name varchar(100) NOT NULL,
    pincode int(20) unsigned NOT NULL,
    PRIMARY KEY (id)
);

create table user (
    id int(20) NOT NULL AUTO_INCREMENT,
    name varchar(100) NOT NULL,
    place_id int(20) NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY (place_id) REFERENCES place(id)
);
Now, inserting a user via gorm:

place := Place{Name: "new delhi", Pincode: 1234}
user := User{Name: "sam", Place: place}
err = db.Debug().Create(&user).Error
// This inserts into both the user and place tables in MySQL.

// Now update the user's name to "Samuel" and its place as follows:
place = Place{Name: "mumbai", Pincode: 1234}
err = db.Debug().Model(&User{}).Where("id = ?", 1).
    Update(&User{Name: "Samuel", Place: place}).Error
It updates the row in the user table but creates a new row in the place table, when it should update the matching row in the place table instead of creating a new one.
Is there any way to do that? Note that I am not using the auto-migrate function to create the DB tables.
The answer to your question lies in gorm's relations, or Association Mode.
The example below appends a new association (for many to many and has many it adds; for has one and belongs to it replaces the current association):
db.Model(&user).Association("Place").Append(Place{Name: "mumbai", Pincode: 1234})
Or you can replace the current associations with new ones:
db.Model(&user).Association("Place").Replace(Place{Name: "mumbai", Pincode: 1234}, Place{Name: "new delhi", Pincode: 1234})
It's probably creating a new row because you didn't set the ID on Place{Name: "mumbai", Pincode: 1234}.
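A minimal sketch of that fix, assuming the existing place row has id 1 (a hypothetical value): set the primary keys on both structs before saving, so gorm issues UPDATEs instead of INSERTs:

// Both IDs are set, so gorm updates the existing rows instead of inserting.
place := Place{ID: 1, Name: "mumbai", Pincode: 1234}
user := User{ID: 1, Name: "Samuel", Place: place, PlaceID: place.ID}
err = db.Debug().Save(&user).Error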

How to use RODBC to save dataframe to table with primary key generated at database

I would like to enter a data frame into an existing table in a database using an R script, and I want the table in the database to have a sequential primary key. My problem is that RODBC doesn't seem to allow the primary key constraint.
Here's the SQL for creating the table I want:
CREATE TABLE [dbo].[results] (
    [ID] INT IDENTITY (1, 1) NOT NULL,
    [FirstName] VARCHAR (255) NULL,
    [LastName] VARCHAR (255) NULL,
    [Birthday] DATETIME NULL,
    [CreateDate] DATETIME NULL,
    CONSTRAINT [PK_dbo.results] PRIMARY KEY CLUSTERED ([ID] ASC)
);
And a test with some R code:
library(RODBC)

ConnectionString1 = "Driver=ODBC Driver 11 for SQL Server;Server=myserver; Database=TestDb; trusted_connection=yes"
ConnectionString2 = "Driver=ODBC Driver 11 for SQL Server;Server=notmyserver; Database=TestDb; trusted_connection=yes"

db1 = odbcDriverConnect(ConnectionString1)
query = "SELECT a.[firstname] as FirstName
       , a.[lastname] as LastName
       , Cast(a.[dob] as datetime) as Birthday
       , cast(a.createDate as datetime) as CreateDate
    FROM [dbo].[People] a"
results = sqlQuery(db1, query, stringsAsFactors = FALSE)
close(db1)

db2 = odbcDriverConnect(ConnectionString2)
sqlSave(db2,
        results,
        append = TRUE,
        varTypes = c(Birthday = "datetime", CreateDate = "datetime"),
        colnames = FALSE,
        rownames = FALSE,
        fast = FALSE)
close(db2)
The first part of the R code just gets some test data into a data frame; it works fine and is not really part of my question (I include it so you can see the format of the test data). When I run the sqlSave function I get an error message:
Error in dimnames(x) <- dn :
length of 'dimnames' [2] not equal to array extent
However, if I remove the primary key from the table, everything works fine:
CREATE TABLE [dbo].[results] (
    [FirstName] VARCHAR (255) NULL,
    [LastName] VARCHAR (255) NULL,
    [Birthday] DATETIME NULL,
    [CreateDate] DATETIME NULL
);
Clearly the primary key is the issue. Normally, with Entity Framework or similar (as I understand it), the primary key is generated at the database when you insert data.
I'd like a way to append data to a table with a primary key using only an R script. Is that possible? There could already be data in the table I'm appending to, so I don't really see a way to create the keys in R before appending.
The problem is line 361 in http://github.com/cran/RODBC/blob/master/R/sql.R: the data.frame and the DB table must have exactly the same number of columns, otherwise you get this error with this stack trace:
Error in dimnames(x) <- dn :
length of 'dimnames' [2] not equal to array extent
3. `colnames<-`(`*tmp*`, value = c("ID", "FirstName", "LastName",
"Birthday", "CreateDate")) at sql.R#361
2. sqlwrite(channel, tablename, dat, verbose = verbose, fast = fast,
test = test, nastring = nastring) at sql.R#211
1. sqlSave(db2, results, append = TRUE, varTypes = c(Birthday = "datetime",
CreateDate = "datetime"), colnames = FALSE, rownames = FALSE,
fast = FALSE, verbose = TRUE)
If you add the ID column to your data.frame, you can no longer use the autoincremented ID column, so that is no solution (or workaround).
A "simple" workaround to the "same columns" limitation of RODBC::sqlSave, sketched below, is:
1. Use sqlSave to save the new rows under another table name.
2. Send an insert into ... select from ... via RODBC::sqlQuery to append the new rows to your original table, which keeps its autoinc ID column.
3. Drop the staging table again (drop table ...).
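In RODBC that staging-table sketch could look like this (results_staging is a hypothetical name; the other names come from the question):

db2 = odbcDriverConnect(ConnectionString2)

# 1. Save the new rows into a staging table without the IDENTITY column.
sqlSave(db2, results, tablename = "results_staging",
        varTypes = c(Birthday = "datetime", CreateDate = "datetime"),
        rownames = FALSE)

# 2. Append into the real table; the IDENTITY column fills itself in.
sqlQuery(db2, "INSERT INTO dbo.results (FirstName, LastName, Birthday, CreateDate)
               SELECT FirstName, LastName, Birthday, CreateDate FROM dbo.results_staging")

# 3. Drop the staging table again.
sqlQuery(db2, "DROP TABLE dbo.results_staging")
close(db2)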
A better option would be to use the newer odbc package, which also offers better performance through bulk-style inserts instead of the single insert statements RODBC sends:
https://github.com/r-dbi/odbc
Look for the function dbWriteTable (an implementation of the DBI::dbWriteTable interface).
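A sketch of the same append with odbc/DBI, reusing the connection details from the question:

library(DBI)

con <- dbConnect(odbc::odbc(),
                 Driver = "ODBC Driver 11 for SQL Server",
                 Server = "notmyserver",
                 Database = "TestDb",
                 trusted_connection = "yes")

# Append into the existing table; the server fills in the IDENTITY column.
dbWriteTable(con, "results", results, append = TRUE)
dbDisconnect(con)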

First DB - How to structure required information

I watched a few YouTube videos about how to structure a database using tables and fields. I am a bit confused about how to structure my information.
I have put my attempt below:
// Identifier Table
// This is where we give each item a new unique identifier
UniqueID []

// Item Table
// This is where the main content goes which is displayed
UniqueID []
Title []
Description []
Date []
Location []
Coordinates []
Source []
Link []

// Misc Table
// This is additional useful information, but not displayed
geocoded []
country name []
By separating out the UniqueID, I can make sure that when I delete a record, new records still get a unique incrementing ID. Can I get some feedback on how I divided my data into three tables?
You gave us no hint about what you want to represent in your DB.
For example: if location and coordinates describe a building, or maybe a room, then it could be useful to keep that information in an extra table and have a relationship from item to it, as this makes it easy to fetch all items connected with one place.
Of course you should apply the same principle for country: a location lies within a country.
BEGIN;

CREATE TABLE "country" (
    "id" integer NOT NULL PRIMARY KEY,
    "name" varchar(255) NOT NULL
);

CREATE TABLE "location" (
    "id" integer NOT NULL PRIMARY KEY,
    "name" varchar(255) NOT NULL,
    "coordinate" varchar(255) NOT NULL,
    "country_id" integer NOT NULL REFERENCES "country" ("id")
);

CREATE TABLE "item" (
    "id" integer NOT NULL PRIMARY KEY,
    "title" varchar(25) NOT NULL,
    "description" text NOT NULL,
    "date" datetime NOT NULL,
    "source" varchar(255) NOT NULL,
    "link" varchar(255) NOT NULL,
    "location_id" integer NOT NULL REFERENCES "location" ("id")
);
In the case stated above, I would pack everything into one table, since there is not enough complexity to benefit from splitting the data into different tables.
When you have more metadata you can split it up into:
Item (for display data)
ItemMeta (for meta data)
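A single-table version of that suggestion might look like the sketch below; the column types are assumptions based on the fields listed in the question:

CREATE TABLE "item" (
    "id" integer NOT NULL PRIMARY KEY,
    "title" varchar(255) NOT NULL,
    "description" text NOT NULL,
    "date" datetime NOT NULL,
    "location" varchar(255) NOT NULL,
    "coordinates" varchar(255) NOT NULL,
    "source" varchar(255) NOT NULL,
    "link" varchar(255) NOT NULL,
    -- the "misc" fields live inline instead of in a separate table
    "geocoded" boolean NOT NULL,
    "country_name" varchar(255)
);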

Django, UserProfile inheritance nightmare?

I'm using both django-userena and django-facebook as my main registration apps.
Let's inherit my own UserProfile from both of them:
from django.contrib.auth.models import User
from django.db import models
from django.utils.translation import ugettext_lazy as _
from userena.models import UserenaBaseProfile
from django_facebook.models import FacebookProfileModel

class UserProfile(UserenaBaseProfile, FacebookProfileModel):
    user = models.OneToOneField(User, unique=True, verbose_name=_('user'), related_name='user_profile')
    department = models.ForeignKey('Department', null=True, blank=True, related_name=_('user'))
    name = models.CharField(max_length=100)
    birthday = models.DateField()

    def __unicode__(self):
        return self.name

class Student(UserProfile):
    courses = models.ManyToManyField(Course, null=True, blank=True, related_name=_('student'))
Now, whenever I want to see a Student within the Django admin, I get this error:
Exception Value: No such column: profiles_userprofile.about_me
But it EXISTS!! This is the output of ./manage.py sqlall profiles:
BEGIN;
CREATE TABLE "profiles_userprofile" (
    "id" integer NOT NULL PRIMARY KEY,
    "mugshot" varchar(100) NOT NULL,
    "privacy" varchar(15) NOT NULL,
    "about_me" text, -- Here it is!
    "facebook_id" bigint UNIQUE,
    "access_token" text NOT NULL,
    "facebook_name" varchar(255) NOT NULL,
    "facebook_profile_url" text NOT NULL,
    "website_url" text NOT NULL,
    "blog_url" text NOT NULL,
    "image" varchar(255),
    "date_of_birth" date,
    "gender" varchar(1),
    "raw_data" text NOT NULL,
    "user_id" integer NOT NULL UNIQUE REFERENCES "auth_user" ("id"),
    "department_id" integer,
    "name" varchar(100) NOT NULL,
    "birthday" date NOT NULL
);
I'm so confused... can anybody give me a hint, please?
sqlall only tells you the SQL that would be sent if you were running syncdb for the first time. It does not reflect the actual state of the database. You must actually run syncdb to have the tables created. Further, if any of the tables already existed, syncdb will not make any changes to them; it only creates tables, never alters them.
If you need to alter the table, you will either have to run SQL against your database manually, or use something like South to do a migration.
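For the manual route, the statement that adds the one missing column from the sqlall output would look something like this (a sketch; adjust the type syntax to your database backend):

ALTER TABLE profiles_userprofile ADD COLUMN about_me text;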