I inherited a database, accessed through Rails, whose id column is not auto-incremented and whose primary key is a composite of two columns:
CREATE TABLE `t_user_history` (
`id` int(11) NOT NULL,
`history_no` int(11) NOT NULL,
`user_login_id` varchar(10) COLLATE utf8_unicode_ci NOT NULL,
`user_name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`user_pass` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`created_at` datetime DEFAULT NULL,
`updated_at` datetime DEFAULT NULL,
`deleted_at` datetime DEFAULT NULL,
`app_version` varchar(31) COLLATE utf8_unicode_ci DEFAULT NULL,
PRIMARY KEY (`history_no`,`id`))
However, when I try to insert into this table from Ruby like this:
tuser = TUserHistory.find_by_id(user.id)
TUserHistory.transaction do
  ntuser = TUserHistory.new
  ntuser.id = tuser.id
  ntuser.history_no = 0
  ntuser.user_login_id = tuser.user_login_id
  ntuser.user_name = tuser.user_name
  ntuser.user_pass = tuser.user_pass
  ntuser.app_version = params[:app]
  ntuser.save
end
I get the error:
getName {"error_code":"Mysql2::Error: Column 'id' cannot be null: INSERT INTO `t_user_history`
(`app_version`, `created_at`, `deleted_at`, `history_no`, `id`, `updated_at`, `user_login_id`, `user_name`, `user_pass`)
VALUES ('v1.2.9', '2012-08-30 09:26:57', NULL, 0, NULL, '2012-08-30 09:26:57', 'userlogin', 'username', 'userpass')"}
Even if I set ntuser.id = 9127 or some other value, it still says that 'id' cannot be null.
I looked at other answers that say it is indeed possible to modify this value, but it seems as though whatever value I attempt to set for ntuser.id gets ignored.
Trashing the table and starting again in a sane manner is not allowed, as this table is already being used by our services. I thought I'd create a new column for user_id before I found out it didn't auto-increment, but even before getting to that step nothing works: not ntuser.id = 0, and not deleting the line that sets ntuser.id.
What is going on here? Why isn't it recognizing the data it has been passed? What is the best (or at least the fastest) way to fix this?
Edit: Rails version 3.1.0
TUserHistory class:
class TUserHistory < ActiveRecord::Base
  set_table_name "t_user_history"
  default_scope select("id, user_login_id, user_name, user_pass, app_version")
  acts_as_paranoid
end
Finally got around to doing it by issuing a raw SQL statement. I wanted to go through Rails, since Rails/Ruby wrap the SQL for you, but it just wasn't working. In the end it looks like this:
tuser = TUserHistory.order("history_no DESC").find_by_id(user.id) # get the most recent entry
TUserHistory.transaction do
  id = tuser.id
  history_no = Integer(tuser.history_no)
  intHist_no = history_no + 1 # because this column doesn't auto-increment
  user_login_id = tuser.user_login_id
  user_name = tuser.user_name
  user_pass = tuser.user_pass
  app_version = params[:app]
  # note: the interpolated values (especially params[:app]) are not escaped here
  sql = "INSERT INTO t_user_history (id, history_no, user_login_id, user_name, user_pass, app_version) VALUES ('#{id}','#{intHist_no}','#{user_login_id}','#{user_name}','#{user_pass}','#{app_version}')"
  ActiveRecord::Base.connection.execute(sql)
end
The only other change is that history_no was added to the default_scope select in the TUserHistory class, so that the column comes back with the tuser query and can then be incremented.
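For reference, the same copy-and-increment step can also be expressed as a single SQL statement, which avoids interpolating values in Ruby at all. This is only a sketch under assumptions: the id 9127 and app version 'v1.2.9' are placeholder values taken from the error output above, not real inputs.
-- Sketch: copy the most recent history row for one user, bumping history_no by one.
INSERT INTO t_user_history
  (id, history_no, user_login_id, user_name, user_pass, app_version, created_at, updated_at)
SELECT h.id,
       h.history_no + 1,        -- history_no does not auto-increment, so bump it here
       h.user_login_id,
       h.user_name,
       h.user_pass,
       'v1.2.9',                -- placeholder app version
       NOW(),
       NOW()
FROM t_user_history AS h
WHERE h.id = 9127               -- placeholder user id
ORDER BY h.history_no DESC      -- take the latest history entry
LIMIT 1;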
Related
I have the following scenario in my system:
a member:
CREATE TABLE `member` (
`memberid` int(11) NOT NULL,
`email` text
);
creates the protocols:
CREATE TABLE `protocol` (
`protocolid` int(11) NOT NULL,
`createdby` int(11) NOT NULL,
`status` varchar(256) DEFAULT NULL
) ;
member can create a feedback post on the protocols
CREATE TABLE `protocolpost` (
`protocolid` int(11) NOT NULL,
`protocolpostid` int(11) NOT NULL,
`createdby` text
) ;
member can reply to the feedback
CREATE TABLE `protocolpostcomment` (
`protocolpostcommentid` int(11) NOT NULL,
`protocolpostid` int(11) NOT NULL,
`commentedby` varchar(256) DEFAULT NULL,
`hasfeedbackreplyviewed` tinyint(1) DEFAULT NULL
) ;
I want to get the total count of replies across all feedback posts made on protocols created by a member, excluding replies from the member who created the protocol and comments made by the author of the post.
I have written the query below, but it returns all of the post comments; I want to exclude the replies made by the feedback creator.
SELECT
protocols.*,
protocolFeedbackReply.*,
protocolfeedback.*
FROM protocolpost AS protocolfeedback
JOIN protocol AS protocols
ON protocols.protocolid = protocolfeedback.protocolid
JOIN protocolpostcomment AS protocolFeedbackReply
ON protocolfeedback.protocolpostid =
protocolFeedbackReply.protocolpostid
WHERE protocols.createdby = 1038
AND protocols.status = "published"
AND protocolFeedbackReply.hasfeedbackreplyviewed = 0
AND protocolfeedback.createdby NOT LIKE Concat('%', (SELECT email
FROM member
WHERE
memberid = 1038),
'%');
I have attached a dbfiddle here:
In the dbfiddle example, only the comment made by the user nwxaofrc#tempemail.com should be in the count.
Thank you for the very good description. Your query is hard to read and could probably be simplified, but the immediate problem is the NOT LIKE condition: it seems you need to check protocolFeedbackReply.commentedby instead of protocolfeedback.createdby, see db<>fiddle
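As a sketch, here is the query rewritten as a count with the condition moved to protocolFeedbackReply.commentedby as suggested (memberid 1038 and the 'published' status are kept from the original query):
SELECT COUNT(*) AS reply_count
FROM protocolpost AS protocolfeedback
JOIN protocol AS protocols
  ON protocols.protocolid = protocolfeedback.protocolid
JOIN protocolpostcomment AS protocolFeedbackReply
  ON protocolfeedback.protocolpostid = protocolFeedbackReply.protocolpostid
WHERE protocols.createdby = 1038
  AND protocols.status = 'published'
  AND protocolFeedbackReply.hasfeedbackreplyviewed = 0
  -- exclude replies written by the protocol's creator
  AND protocolFeedbackReply.commentedby NOT LIKE CONCAT('%', (SELECT email FROM member WHERE memberid = 1038), '%');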
I have a staging table and a data warehouse table, and the merge between them keeps giving me a primary key constraint violation. I can't figure out why, since the combination of DRIVERID and RACEID should be unique. How can I be getting a constraint violation on the primary key?
table
CREATE TABLE QUALIFYING (
QUALIFYID DECIMAL(18,0) IDENTITY NOT NULL,
RACEID DECIMAL(18,0) DEFAULT '0' NOT NULL,
DRIVERID DECIMAL(18,0) DEFAULT '0' NOT NULL,
CONSTRUCTORID DECIMAL(18,0) DEFAULT '0' NOT NULL,
DRIVERNUMBER DECIMAL(18,0) DEFAULT '0' NOT NULL,
DRIVERPOSITION DECIMAL(18,0) DEFAULT NULL,
Q1 VARCHAR(255) UTF8 DEFAULT NULL,
Q2 VARCHAR(255) UTF8 DEFAULT NULL,
Q3 VARCHAR(255) UTF8 DEFAULT NULL,
PRIMARY KEY(QUALIFYID)
);
Staging
CREATE OR REPLACE TABLE STGQUALIFYING(
raceId int DEFAULT '0' NOT NULL,
driverId int DEFAULT '0' NOT NULL,
constructorId int DEFAULT '0' NOT NULL,
driverNumber int DEFAULT '0' NOT NULL,
driverPosition int DEFAULT NULL,
q1 varchar(255) DEFAULT NULL,
q2 varchar(255) DEFAULT NULL,
q3 varchar(255) DEFAULT NULL,
PRIMARY KEY(RACEID, DRIVERID)
);
SQL
MERGE INTO QUALIFYING c
USING STGQUALIFYING n
ON
(n.RACEID = c.RACEID AND n.DRIVERID = c.DRIVERID)
WHEN MATCHED THEN
UPDATE SET
CONSTRUCTORID = n.CONSTRUCTORID, DRIVERNUMBER = n.DRIVERNUMBER, DRIVERPOSITION = n.DRIVERPOSITION, Q1 = n.Q1, Q2 = n.Q2, Q3 = n.Q3
WHEN NOT MATCHED THEN
INSERT (RACEID, DRIVERID, CONSTRUCTORID, DRIVERNUMBER, DRIVERPOSITION, Q1, Q2, Q3) VALUES
(RACEID, DRIVERID, CONSTRUCTORID, DRIVERNUMBER, DRIVERPOSITION, Q1, Q2, Q3);
The EXASolution user manual says:
The content of an identity column applies to the following rules:
If you specify an explicit value for the identity column while inserting a row, then this value is inserted.
In all other cases monotonically increasing numbers are generated by the system, but gaps can occur between the numbers.
and
You should not mistake an identity column with a constraint, i.e. identity columns do not guarantee unique values. But the values are unique as long as values are inserted only implicitly and are not changed manually.
You've put a primary key constraint on your identity column, so it must be unique. Since you are getting duplicates from your merge, either (a) you have, at some point, provided explicit values as in the first bullet above or updated a value manually, and the monotonically increasing sequence has reached a point where it is clashing with those existing values; or (b) there's a bug in their merge. The former seems more likely.
You can look at the most recently inserted value if you have one, or do a temporary insert of a new row (via the merge) to see whether it creates the row successfully, and if so whether you already have ID values higher than the one it allocates to that new row. If there are no higher values already, the insert works, and the merge still fails consistently, then it sounds like something you'd need to raise with EXASolution.
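A sketch of that check in plain SQL, assuming direct access to the QUALIFYING table (the -1/-1 key is a throwaway value used only for the test):
-- current high-water mark of the surrogate key
SELECT MAX(QUALIFYID) AS current_max FROM QUALIFYING;
-- insert one throwaway row implicitly (no QUALIFYID supplied) and see what value it receives
INSERT INTO QUALIFYING (RACEID, DRIVERID) VALUES (-1, -1);
SELECT QUALIFYID FROM QUALIFYING WHERE RACEID = -1 AND DRIVERID = -1;
-- if that QUALIFYID is lower than current_max, the identity sequence has fallen behind
-- the existing values and subsequent MERGE inserts can collide with them
DELETE FROM QUALIFYING WHERE RACEID = -1 AND DRIVERID = -1;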
I'm using both django-userena and django-facebook as my main registration apps.
Let's inherit my own UserProfile from both of them:
from django.db import models
from django.contrib.auth.models import User
from django.utils.translation import ugettext_lazy as _

from userena.models import UserenaBaseProfile
from django_facebook.models import FacebookProfileModel
# Course is defined elsewhere in the project

class UserProfile(UserenaBaseProfile, FacebookProfileModel):
    user = models.OneToOneField(User, unique=True, verbose_name=_('user'), related_name='user_profile')
    department = models.ForeignKey('Department', null=True, blank=True, related_name=_('user'))
    name = models.CharField(max_length=100)
    birthday = models.DateField()

    def __unicode__(self):
        return self.name

class Student(UserProfile):
    courses = models.ManyToManyField(Course, null=True, blank=True, related_name=_('student'))
Now, whenever I want to go and look at a Student in the Django admin, I get this error:
Exception Value: No such column: profiles_userprofile.about_me
But it EXISTS! This is the output of ./manage.py sqlall profiles:
BEGIN;
CREATE TABLE "profiles_userprofile" (
"id" integer NOT NULL PRIMARY KEY,
"mugshot" varchar(100) NOT NULL,
"privacy" varchar(15) NOT NULL,
"about_me" text, ## Her it is !!!
"facebook_id" bigint UNIQUE,
"access_token" text NOT NULL,
"facebook_name" varchar(255) NOT NULL,
"facebook_profile_url" text NOT NULL,
"website_url" text NOT NULL,
"blog_url" text NOT NULL,
"image" varchar(255),
"date_of_birth" date,
"gender" varchar(1),
"raw_data" text NOT NULL,
"user_id" integer NOT NULL UNIQUE REFERENCES "auth_user" ("id"),
"department_id" integer,
"name" varchar(100) NOT NULL,
"birthday" date NOT NULL
)
;
I'm so confused. Can anybody give me a hint, please?
sqlall only tells you the SQL that would be sent, if you were running syncdb for the first time. It does not give you the actual state of the database. You must actually run syncdb to have the tables created. Further, if any of the tables already existed, syncdb will not make any changes to the table(s); it only creates tables, never alters them.
If you need to alter the table, you will either have to manually run SQL on your database, or use something like South to do a migration.
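If you go the manual SQL route, a minimal sketch for this particular missing column would be something along these lines (assuming the SQLite backend that the sqlall output above suggests; adjust the type for your database):
ALTER TABLE "profiles_userprofile" ADD COLUMN "about_me" text;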
Can anyone see why I'm getting this error:
#1136 - Column count doesn't match value count at row 1
Here is the query:
INSERT INTO `people`
(`id`,`title`,`first_name`,`middle_initial`,`preferred_name`,`last_name`,
`home_phone`,`mobile_phone`,`email`,`gender`,`date_of_birth`,`qff`,`status`)
VALUES ('20','Mr','first','mid','pref','fam',
'home','mobile','email','male','0000-00-00','qff','active')
ON DUPLICATE KEY UPDATE
`people`.`id` = LAST_INSERT_ID(`people`.`id`),
`people`.`title` = 'Mr',
`people`.`first_name` = 'first',
`people`.`middle_initial` = 'mid',
`people`.`preferred_name` = 'pref',
`people`.`last_name` = 'fam',
`people`.`home_phone` = 'home',
`people`.`mobile_phone` = 'mobile',
`people`.`email` = 'email',
`people`.`gender` = 'male',
`people`.`date_of_birth` = '0000-00-00',
`people`.`qff` = 'qff',
`people`.`status` = 'active'
And the table structure:
CREATE TABLE `people` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`title` text,
`first_name` text,
`middle_initial` text,
`preferred_name` text,
`last_name` text,
`home_phone` text,
`mobile_phone` text,
`email` text,
`gender` enum('male','female') DEFAULT NULL,
`date_of_birth` date DEFAULT NULL,
`qff` varchar(20) NOT NULL,
`status` enum('active','inactive') NOT NULL,
`updated` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`updated_by` int(10) unsigned DEFAULT NULL,
`updated_by_type` enum('person','admin') DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
I had the exact same problem a while ago - for me the issue was related to a trigger on the table in question.
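If you want to check whether a trigger is involved here as well, MySQL can list the triggers defined on the table, for example:
SHOW TRIGGERS WHERE `Table` = 'people';
-- and, to inspect a specific trigger's body (the trigger name here is hypothetical):
SHOW CREATE TRIGGER before_people_insert;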
Recently I had the same problem, but with a batch insert/update. In my case it was not a trigger; it was the 'foreach' element used to build the batch statement. If you wrap the whole value list like this:
<foreach collection="meters" index="index" item="meter" open="(" close=")" separator=",">
    ...
</foreach>
you get this error:
Error Code: 1136. Column count doesn't match value count at row 1
In my test, the open="(" and close=")" attributes add an extra pair of parentheses around the generated values (I didn't check the log). So instead, put the parentheses inside the foreach body:
<foreach collection="medichines" index="index" item="medichine" separator=",">
    (
        ...
    )
</foreach>
This fixes the error.
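To illustrate the difference, the generated SQL ends up looking roughly like this (the table and column names are made up for the example):
-- with open="(" close=")" on the foreach, the whole value list gets an extra pair of
-- parentheses, so the statement no longer matches the column list and fails:
INSERT INTO meters (col_a, col_b) VALUES ((1, 'a'), (2, 'b'));
-- with the parentheses inside the foreach body, each tuple is its own row:
INSERT INTO meters (col_a, col_b) VALUES (1, 'a'), (2, 'b');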
I have a simple card table:
CREATE TABLE `users_individual_cards` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` char(36) NOT NULL,
`individual_card_id` int(11) NOT NULL,
`own` int(10) unsigned NOT NULL,
`want` int(10) unsigned NOT NULL,
`trade` int(10) unsigned NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `user_id` (`user_id`,`individual_card_id`),
KEY `user_id_2` (`user_id`),
KEY `individual_card_id` (`individual_card_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1;
I have ajax to add and remove the records based on OWN, WANT, and TRADE. However, if the user removes all of the OWN, WANT, and TRADE cards, they go to zero but it will leave the record in the database. I would prefer to have the record removed. Is checking after each "update" to see if all the columns = 0 the only way to do this? Or can I set a conditional trigger with something like:
// pseudo SQL
AFTER update IF (OWN = 0, WANT = 0, TRADE = 0) DELETE
What is the best way to do this? Can you help with the syntax?
Why not just fire two queries from PHP (or other front end)?
update `users_individual_cards` ...
delete from `users_individual_cards` where ... (same condition) and own + want + trade = 0
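Concretely, a sketch of that two-statement approach; the user_id and individual_card_id values are placeholders, and the SET clause stands in for whatever the AJAX handler actually updates:
UPDATE users_individual_cards
   SET own = 0, want = 0, trade = 0
 WHERE user_id = 'some-user-uuid' AND individual_card_id = 42;
-- remove the row only when nothing is owned, wanted, or offered for trade any more
DELETE FROM users_individual_cards
 WHERE user_id = 'some-user-uuid' AND individual_card_id = 42
   AND own + want + trade = 0;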
The trigger will be:
DELIMITER $$
CREATE TRIGGER users_individual_cards_trigger
AFTER UPDATE ON users_individual_cards
FOR EACH ROW
BEGIN
  -- note: the columns must not be quoted as string literals ('OWN'), or the
  -- comparison is against the literal text instead of the column value
  DELETE FROM users_individual_cards
  WHERE own = 0 AND want = 0 AND trade = 0;
END$$
DELIMITER ;
The solution using the separate delete query will be better, because MySQL does not allow a trigger to modify the table it is defined on, so the trigger above will fail at runtime (error 1442).