Django, UserProfile inheritance nightmare? - sql

I'm using both django-userena and django-facebook as my main registration apps.
I inherit my own UserProfile from both of them:
from django.db import models
from django.contrib.auth.models import User
from django.utils.translation import ugettext_lazy as _
from userena.models import UserenaBaseProfile
from django_facebook.models import FacebookProfileModel

class UserProfile(UserenaBaseProfile, FacebookProfileModel):
    user = models.OneToOneField(User, unique=True, verbose_name=_('user'), related_name='user_profile')
    department = models.ForeignKey('Department', null=True, blank=True, related_name=_('user'))
    name = models.CharField(max_length=100)
    birthday = models.DateField()

    def __unicode__(self):
        return self.name

class Student(UserProfile):
    courses = models.ManyToManyField(Course, null=True, blank=True, related_name=_('student'))
Now, whenever I want to see a Student in the Django admin, I get this error:
Exception Value: No such column: profiles_userprofile.about_me
But it EXISTS!! This is the output of ./manage.py sqlall profiles:
BEGIN;
CREATE TABLE "profiles_userprofile" (
"id" integer NOT NULL PRIMARY KEY,
"mugshot" varchar(100) NOT NULL,
"privacy" varchar(15) NOT NULL,
"about_me" text, ## Her it is !!!
"facebook_id" bigint UNIQUE,
"access_token" text NOT NULL,
"facebook_name" varchar(255) NOT NULL,
"facebook_profile_url" text NOT NULL,
"website_url" text NOT NULL,
"blog_url" text NOT NULL,
"image" varchar(255),
"date_of_birth" date,
"gender" varchar(1),
"raw_data" text NOT NULL,
"user_id" integer NOT NULL UNIQUE REFERENCES "auth_user" ("id"),
"department_id" integer,
"name" varchar(100) NOT NULL,
"birthday" date NOT NULL
)
;
I'm so confused... can anybody give me a hint, please?

sqlall only tells you the SQL that would be sent if you were running syncdb for the first time. It does not give you the actual state of the database. You must actually run syncdb to have the tables created. Further, if any of the tables already existed, syncdb will not make any changes to them; it only creates tables, never alters them.
If you need to alter the table, you will either have to manually run SQL on your database, or use something like South to do a migration.
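For example, since the admin error complains specifically about profiles_userprofile.about_me, a minimal manual fix (a sketch assuming SQLite, which matches the quoting style of your sqlall output, and assuming only that one column is missing) would be:
ALTER TABLE "profiles_userprofile" ADD COLUMN "about_me" text;
You would have to repeat this for every other column syncdb never created, which is exactly the bookkeeping a South migration automates.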

Related

changing the name of a column in postgresql database

I'm trying to rename a column named "photo_url". I tried simply changing the string name to "test" and killing the PostgreSQL service and restarting it, but it doesn't seem to be working; it still loads up as "photo_url".
I'm not sure how to change the name; if anyone could help me it would be greatly appreciated.
This is my table. I'm using PostgreSQL and pgweb to view my database; I used DBDesigner to generate this schema:
CREATE TABLE "users" (
"user_id" serial NOT NULL,
"name" TEXT NOT NULL,
"instrument" TEXT NOT NULL,
"country" TEXT NOT NULL,
"state" TEXT NOT NULL,
"city" TEXT NOT NULL,
"about" TEXT NOT NULL,
"email" TEXT NOT NULL UNIQUE,
"hashed_password" TEXT NOT NULL,
"photo_url" TEXT NOT NULL,
"created_at" timestamptz NOT NULL default now(),
CONSTRAINT "users_pk" PRIMARY KEY ("user_id")
) WITH (
OIDS=FALSE
);
If you've already created the table, you can use this statement to rename the column:
ALTER TABLE users RENAME COLUMN photo_url TO test;
otherwise simply recreate your table with the new column name.
More information on the ALTER TABLE command can be found in the PostgreSQL Docs.
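If you want to confirm the rename took effect without relying on pgweb, you can query the catalog (users is the table from the schema above):
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'users';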

How to use RODBC to save dataframe to table with primary key generated at database

I would like to enter a data frame into an existing table in a database using an R script, and I want the table in the database to have a sequential primary key. My problem is that RODBC doesn't seem to allow the primary key constraint.
Here's the SQL for creating the table I want:
CREATE TABLE [dbo].[results] (
[ID] INT IDENTITY (1, 1) NOT NULL,
[FirstName] VARCHAR (255) NULL,
[LastName] VARCHAR (255) NULL,
[Birthday] DATETIME NULL,
[CreateDate] DATETIME NULL,
CONSTRAINT [PK_dbo.results] PRIMARY KEY CLUSTERED ([ID] ASC)
);
And a test with some R code:
library(RODBC)

ConnectionString1="Driver=ODBC Driver 11 for SQL Server;Server=myserver; Database=TestDb; trusted_connection=yes"
ConnectionString2="Driver=ODBC Driver 11 for SQL Server;Server=notmyserver; Database=TestDb; trusted_connection=yes"
db1=odbcDriverConnect(ConnectionString1)
query="SELECT a.[firstname] as FirstName
, a.[lastname] as LastName
, Cast(a.[dob] as datetime) as Birthday
, cast(a.createDate as datetime) as CreateDate
FROM [dbo].[People] a"
results=sqlQuery(db1, query, stringsAsFactors=FALSE)
close(db1)
db2=odbcDriverConnect(ConnectionString2)
sqlSave(db2,
        results,
        append=TRUE,
        varTypes=c(Birthday="datetime", CreateDate="datetime"),
        colnames=FALSE,
        rownames=FALSE, fast=FALSE)
close(db2)
The first part of the R code just gets some test data into a data frame -- it works fine and isn't part of my question here (I'm including it so you can see the format of the test data). When I run the sqlSave function I get an error message:
Error in dimnames(x) <- dn :
length of 'dimnames' [2] not equal to array extent
However, if I remove the primary key from the database, everything works fine with this table:
CREATE TABLE [dbo].[results] (
[FirstName] VARCHAR (255) NULL,
[LastName] VARCHAR (255) NULL,
[Birthday] DATETIME NULL,
[CreateDate] DATETIME NULL
);
Clearly the primary key is the issue. Normally with Entity Framework or similar ORMs (as I understand it), the primary key is generated at the database when you insert data.
I'd like a way to append data to a table with a primary key using only an R script. Is that possible? There could already be data in the table I'm adding to, so I don't really see a way to create keys in R before trying to append to the table.
The problem is line 361 in http://github.com/cran/RODBC/blob/master/R/sql.R - the data.frame and the DB table must have exactly the same number of columns, otherwise you get this error with this stacktrace:
Error in dimnames(x) <- dn :
length of 'dimnames' [2] not equal to array extent
3. `colnames<-`(`*tmp*`, value = c("ID", "FirstName", "LastName",
"Birthday", "CreateDate")) at sql.R#361
2. sqlwrite(channel, tablename, dat, verbose = verbose, fast = fast,
test = test, nastring = nastring) at sql.R#211
1. sqlSave(db2, results, append = TRUE, varTypes = c(Birthday = "datetime",
CreateDate = "datetime"), colnames = FALSE, rownames = FALSE,
fast = FALSE, verbose = TRUE)
If you add the ID column to your data.frame, you can no longer use the autoinc ID column, so this is no solution (or even a workaround).
A "simple" workaround to the "same columns" limitation of RODBC::sqlSave is:
1. Use sqlSave to save the new rows into another (staging) table name.
2. Send an insert into ... select from ... via RODBC::sqlQuery to append the new rows to your original table that includes the autoinc ID column, as sketched below.
3. Delete the table with the new rows again (drop table ...).
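A minimal SQL sketch of steps 2 and 3, assuming sqlSave wrote the new rows to a hypothetical staging table [dbo].[results_staging] and that [dbo].[results] is the target table from the question:
INSERT INTO [dbo].[results] ([FirstName], [LastName], [Birthday], [CreateDate])
SELECT [FirstName], [LastName], [Birthday], [CreateDate]
FROM [dbo].[results_staging];
DROP TABLE [dbo].[results_staging];
Because ID is omitted from the insert column list, SQL Server fills it from the IDENTITY sequence.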
A better option would be to use the newer odbc package, which also offers better performance through bulk-style inserts instead of sending single insert statements the way RODBC does:
https://github.com/r-dbi/odbc
Look for the function dbWriteTable (which is an implementation of the interface DBI::dbWriteTable).

First DB - How to structure required information

I watched a few YouTube videos about how to structure a database using tables and fields. I am a bit confused about how to structure my information.
I have put my attempt below:
// Identifier Table
// This is where we give each item a new unique identifier
UniqueID []
// Item Table
// This is where the main content goes which is displayed
UniqueID []
Title []
Description []
Date []
Location []
Coordinates []
Source []
Link []
// Misc Table
// This is additional useful information, but not displayed
geocoded []
country name []
By separating out the UniqueID, I can make sure that new records still get a unique, incrementing ID even when I delete a record. Can I get some feedback on how I divided my data into three tables?
You gave us no hint about what you want to represent in your DB.
For example: if location and coordinates describe a building or maybe a room, then it could be useful to save that information in an extra table and have a relationship from item to it, as this would make it easy to fetch all items connected with one place.
Of course you should apply the same principle to country: a location lies within a country.
BEGIN;
CREATE TABLE "country" (
"id" integer NOT NULL PRIMARY KEY,
"name" varchar(255) NOT NULL
)
;
CREATE TABLE "location" (
"id" integer NOT NULL PRIMARY KEY,
"name" varchar(255) NOT NULL,
"coordinate" varchar(255) NOT NULL,
"country_id" integer NOT NULL REFERENCES "country" ("id")
)
;
CREATE TABLE "item" (
"id" integer NOT NULL PRIMARY KEY,
"title" varchar(25) NOT NULL,
"description" text NOT NULL,
"date" datetime NOT NULL,
"source" varchar(255) NOT NULL,
"link" varchar(255) NOT NULL,
"location_id" integer NOT NULL REFERENCES "location" ("id")
)
;
In the case stated above I would pack everything into one table, since there is not enough complexity to benefit from splitting the data into different tables.
When you have more metadata you can split it up into Item (for display data) and ItemMeta (for metadata), as sketched below.
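A rough sketch of what that split could look like, in the same SQL dialect as the schema above (the column choices are illustrative, based on the fields from the question, not a fixed recipe):
CREATE TABLE "item" (
"id" integer NOT NULL PRIMARY KEY,
"title" varchar(255) NOT NULL,
"description" text NOT NULL
)
;
CREATE TABLE "item_meta" (
"id" integer NOT NULL PRIMARY KEY,
"item_id" integer NOT NULL REFERENCES "item" ("id"),
"geocoded" varchar(255),
"country_name" varchar(255)
)
;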

Sql Update command used in play framework (scala) does not seem to be working

I have my SQL table set up as follows:
create table contact(
id bigint not null,
first_name varchar(255) not null,
last_name varchar(255) not null,
phone varchar(255) not null,
email varchar(255) not null,
company varchar(255) not null,
external_access varchar(255),
online_status varchar(12),
constraint pk_computer primary key (id));
So initially I input data values into the table, except for external_access and online_status. Then I try to update online_status using the function below.
DB.withConnection { implicit connection =>
  SQL(
    """
    update contact
    set online_status = online
    where email = {email}
    """
  ).on(
    'email -> email
  ).executeUpdate()
}
So after the online status is updated, I try to display the page again by using
select * from contact
(The above code is just the gist; the actual display function is the list page-display function from https://github.com/playframework/Play20/blob/master/samples/scala/computer-database/app/models/Models.scala.)
However, the online_status is not yet updated. It continues to display nothing (in the online_status column). Can someone help me debug this?
online_status is a varchar, so your query should probably be:
"""
update contact
set online_status = 'online'
where email = {email}
"""
Notice the quotes ('') that make the value a string literal.
Also, make sure you are not storing the old values in a cache.
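You could also bind the status as a parameter instead of inlining the literal; a sketch in the same Anorm style as the question (the {status} placeholder and its binding are my addition):
update contact
set online_status = {status}
where email = {email}
and then pass 'status -> "online" alongside 'email -> email in the .on(...) call.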

Can't assign id, "Column 'id' cannot be null"

I inherited a database used from Rails in which id is not set incrementally, and whose primary key is composite (spanning two columns):
CREATE TABLE `t_user_history` (
`id` int(11) NOT NULL,
`history_no` int(11) NOT NULL,
`user_login_id` varchar(10) COLLATE utf8_unicode_ci NOT NULL,
`user_name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`user_pass` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`created_at` datetime DEFAULT NULL,
`updated_at` datetime DEFAULT NULL,
`deleted_at` datetime DEFAULT NULL,
`app_version` varchar(31) COLLATE utf8_unicode_ci DEFAULT NULL,
PRIMARY KEY (`history_no`,`id`))
However, when I try to insert into this table using Ruby like this:
tuser = TUserHistory.find_by_id(user.id)
TUserHistory.transaction do
  ntuser = TUserHistory.new
  ntuser.id = tuser.id
  ntuser.history_no = 0
  ntuser.user_login_id = tuser.user_login_id
  ntuser.user_name = tuser.user_name
  ntuser.user_pass = tuser.user_pass
  ntuser.app_version = params[:app]
  ntuser.save
end
I get the error:
getName {"error_code":"Mysql2::Error: Column 'id' cannot be null: INSERT INTO `t_user_history`
(`app_version`, `created_at`, `deleted_at`, `history_no`, `id`, `updated_at`, `user_login_id`, `user_name`, `user_pass`)
VALUES ('v1.2.9', '2012-08-30 09:26:57', NULL, 0, NULL, '2012-08-30 09:26:57', 'userlogin', 'username', 'userpass')"}
Even if I set ntuser.id = 9127 or some other value, it still says that 'id' cannot be null.
I looked at other answers that say it is indeed possible to modify this value, but it seems as though whatever value I attempt to set for ntuser.id gets ignored.
Trashing the table and starting again in a sane manner is not allowed, as this table is already being used by our services. I thought I'd create a new column for user_id before I found out the table doesn't auto-increment, but even before getting to that step nothing works: not ntuser.id = 0, and not deleting the line that assigns ntuser.id.
What is going on here? Why isn't it recognizing the data it has been passed? What is the best (no, fastest) way to fix this?
Edit: Rails version 3.1.0
TUserHistory class:
class TUserHistory < ActiveRecord::Base
  set_table_name "t_user_history"
  default_scope select("id, user_login_id, user_name, user_pass, app_version")
  acts_as_paranoid
end
Finally got around to doing it by just issuing a SQL statement directly. I wanted to use Rails, seeing as Rails/Ruby have wrappers for SQL, but it just wasn't working. In the end, it looks like this:
tuser = TUserHistory.order("history_no DESC").find_by_id(user.id) # get the last entry
TUserHistory.transaction do
  id = tuser.id
  history_no = Integer(tuser.history_no)
  intHist_no = history_no + 1 # because this column doesn't auto-increment
  user_login_id = tuser.user_login_id
  user_name = tuser.user_name
  user_pass = tuser.user_pass
  app_version = params[:app]
  # Interpolating values straight into SQL is injection-prone; quoting them
  # (e.g. with ActiveRecord::Base.connection.quote) would be safer.
  sql = "INSERT INTO t_user_history (id, history_no, user_login_id, user_name, user_pass, app_version) VALUES ('#{id}', '#{intHist_no}', '#{user_login_id}', '#{user_name}', '#{user_pass}', '#{app_version}')"
  ActiveRecord::Base.connection.execute(sql)
end
The only other thing that changed is that history_no was added to the default_scope select for TUserHistory, so that it is returned with the tuser select call and can then be incremented.