MySQL: Multiple Inserts for a single column

I'm looking for a way to do multiple row inserts when I'm only inserting data for a single column.
Here is the example table:
+-------+-------------+------+-----+---------+----------------+
| Field | Type        | Null | Key | Default | Extra          |
+-------+-------------+------+-----+---------+----------------+
| id    | tinyint(4)  | NO   | PRI | NULL    | auto_increment |
| name  | varchar(40) | NO   | UNI | NULL    |                |
+-------+-------------+------+-----+---------+----------------+
I want to be able to insert something like ('admin', 'author', 'mod', 'user', 'guest') into the name column for each row.
The MySQL documentation shows that multiple inserts should be in the format:
INSERT INTO tbl_name (a,b,c) VALUES(1,2,3),(4,5,6),(7,8,9);
However my statement ends up looking like this:
INSERT INTO User_Role(name) VALUES ('admin','author','mod','user','guest');
And I get the following:
ERROR 1136 (21S01): Column count doesn't match value count at row 1
Meaning that it thinks I'm trying to do a single row insert.
I'm not sure if I'm just missing something simple here, but I don't see anything in particular in the MySQL docs for this use case.

Your syntax is a bit off. Put parentheses around each data "set" (meaning a single value in this case) that you are trying to insert:
INSERT INTO User_Role(name) VALUES ('admin'), ('author'), ('mod'), ('user'), ('guest');
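Assuming the table starts out empty, the auto_increment id column fills itself in, so a quick check afterwards should show something like:
SELECT * FROM User_Role;
+----+--------+
| id | name   |
+----+--------+
|  1 | admin  |
|  2 | author |
|  3 | mod    |
|  4 | user   |
|  5 | guest  |
+----+--------+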

I would advise you not to put multiple values in a single column. If you need several values per id, make a separate table and insert one row per value:
INSERT INTO table_name (id, name) VALUES (1, 'name1'), (1, 'name2'), (1, 'name3'), (1, 'name4');
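A minimal sketch of what that table_name table could look like (the column types are guesses; the composite key keeps each (id, name) pair unique):
CREATE TABLE table_name (
    id tinyint NOT NULL,
    name varchar(40) NOT NULL,
    PRIMARY KEY (id, name)
);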


Get back the id of each insert in SQL Server

Let's say we want to insert two users and I want to know the userId of each record I inserted.
Example:
Db:
User.lookup table with these columns:
UserId(PK, identity) | Username
Setup, insert two users:
declare @users table (uniqueId INT, name nvarchar(100));
insert into @users (uniqueId, name) values (0, 'TestUser'); -- Two users with the same name; they'll get different userIds in the db
insert into @users (uniqueId, name) values (1, 'TestUser'); -- uniqueId is just an autonumber I use to tell the difference between them.
Insert statement:
insert into user.lookup (userName)
output inserted.userid
select name from @users;
This will return two userIds, e.g. 1 and 2. But how do I know which of the two users got which userId?
I can differentiate them in code with the uniqueId I pass, but I don't know how to get it back from the insert.
Don't just output the id. You can include other columns:
insert into user.lookup (userName)
output inserted.*
select name from @users;
Here is a db<>fiddle.
You can't correlate the inserted rows with the database-assigned IDs, at least not without inserting an alternate key as well. INSERT ... OUTPUT will not let you output a value that wasn't actually inserted, so the column that correlates the un-keyed rows with the new key values has to be actually inserted.
So the options are:
Use a SEQUENCE instead of IDENTITY and either assign IDs to the table variable before the insert, or assign IDs to the entities on the client, e.g. by calling sp_sequence_get_range.
Use MERGE instead of INSERT (sketched below). This is what Entity Framework Core does. See e.g. The Case of Entity Framework Core’s Odd SQL
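A rough sketch of the MERGE route, using the tables from this question (the ON 1 = 0 condition forces every source row down the NOT MATCHED branch, and unlike a plain INSERT, the OUTPUT clause of a MERGE may reference source columns such as uniqueId):
MERGE INTO user.lookup AS t
USING @users AS s
ON 1 = 0
WHEN NOT MATCHED THEN
    INSERT (userName) VALUES (s.name)
OUTPUT s.uniqueId, inserted.userId;
Each output row then pairs the uniqueId you passed in with the identity value the database assigned.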
As Gordon explained, one can output more than 1 column.
But just to put my 2 cents in, such an insert doesn't really need an intermediate table variable.
create table lookup (
lookupId int identity primary key,
userName nvarchar(100),
createdOn datetime2 not null
default sysdatetime()
)
GO
insert into lookup (userName) values
('TestUser1')
,('TestUser2')
;
GO
2 rows affected
insert into lookup (userName)
output inserted.lookupId, inserted.userName
values
('Testuser3'),
('Testuser3')
GO
lookupId | userName
-------: | :--------
3 | Testuser3
4 | Testuser3
select lookupId, userName
--, convert(varchar,createdOn) as createdOn
from lookup
order by lookupId
GO
lookupId | userName
-------: | :--------
1 | TestUser1
2 | TestUser2
3 | Testuser3
4 | Testuser3
db<>fiddle here

How to make a table column always use the default value during insertion?

I want a timestamp column whose value is always the moment the row was created (specified by the default current_timestamp clause); any value the client provides should be ignored or cause an error.
create table test
(
creation_time timestamp default current_timestamp
);
insert into test(creation_time)
values (make_timestamp(1999, 1, 8, 1, 1, 1));
After execution, the table is:
+----------------------------+
| creation_time              |
+----------------------------+
| 1999-01-08 01:01:01.000000 |
+----------------------------+
What I want is:
+----------------------------+
| creation_time              |
+----------------------------+
| 2019-06-09 16:07:01.780816 |
+----------------------------+
That's not possible with a default value.
You should create a trigger that sets the timestamp.
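A minimal sketch of that (function and trigger names are made up; the BEFORE INSERT trigger overwrites whatever value the client supplied):
create function force_creation_time() returns trigger as $$
begin
    new.creation_time := current_timestamp;  -- ignore any client-provided value
    return new;
end;
$$ language plpgsql;

create trigger test_force_creation_time
before insert on test
for each row execute function force_creation_time();
(On PostgreSQL versions before 11, write execute procedure instead of execute function.)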
Impaler's method is fine if the table has no other columns. If it does have other columns, it is simplest just to leave the timestamp column out of the insert:
insert into test (col)
values (<value>);
Or if you really want to be Postgres specific with no other columns:
insert into test
select;
EDIT:
I would suggest that you prevent inserting directly into the table and only allow changes through a view:
create view v_test as
select . . . -- all columns but the timestamp
from test;
Then only allow changes through the view.
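For example, assuming test also had a name column (the table in the question has only the timestamp, so this is purely illustrative):
create view v_test as
select name   -- every column except creation_time
from test;

insert into v_test (name) values ('abc');  -- creation_time still gets its default
Simple single-table views like this are automatically updatable in PostgreSQL, so the insert goes through to test while creation_time stays out of the client's reach.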
Use the DEFAULT keyword, as in:
insert into test(creation_time)
values (default);
DEFAULT is part of the SQL standard.

How to insert a row in a table every time I insert a new row in the main table?

I have a VB.net app connected to an Access 2010 database. I have a table with personal information for many students and another table with true/false fields for every course a student has passed.
The structure is something like this
Table students
|Id_student | Name | Phone |
Table finishedCourses
| Id_stutent | chemistry | physics | maths |
How can I add a new row to the finishedCourses table every time I insert a new row into the students table?
I don't know how to add rows with the same id to both tables.
I expect something like this
Table students
Id_student | Name | Phone
1234456 | abc | 12432534645
Table finishedCourses
Id_stutent | chemistry | physics | maths
1234456 | false | false | false
The default value for each course is False; the initial status of every course is incomplete.
If I understand correctly, you want to insert initial values into finishedCourses whenever you insert a student's information into the students table, am I right?
I am not familiar with Access; it may not support triggers, otherwise you could use a trigger to implement this requirement.
For this problem you can simply run two insert statements that use the same student id, like below:
insert into students (Id_student, Name, Phone) values (1234456, 'abc', '12432534645');
insert into finishedCourses (Id_stutent, chemistry, physics, maths) values (1234456, false, false, false);
If you are using SQL Server then you can use a trigger. Sample code is below; I have no idea about MS Access.
CREATE TRIGGER trgAfterInsert ON [dbo].[students]
FOR INSERT
AS
-- Note: this assumes a single-row insert; a multi-row insert would only pick up one id.
DECLARE @Id_stutent int;
SELECT @Id_stutent = i.Id_student FROM inserted i;
INSERT INTO finishedCourses (Id_stutent, chemistry, physics, maths)
VALUES (@Id_stutent, 0, 0, 0);
GO

Check constraint for a flag column

Database is MS SQL Server.
Data example:
| Name | defaultValue | value    |
| one  | true         | valone   |
| one  | false        | valtwo   |
| one  | false        | valthree |
I'm after a way of constraining the table such that each 'Name' can only have one row with 'defaultValue' set to true.
Create a computed column like this (assuming defaultValue is a bit column, so 1 is true and 0 is false):
ALTER TABLE yourtable
ADD ValueCheck AS CASE defaultValue
WHEN 1 THEN 1
WHEN 0 THEN NULL
END
and then add a unique constraint on (Name, ValueCheck), as sketched below.
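The constraint itself could be added like this (the constraint name is arbitrary):
ALTER TABLE yourtable
ADD CONSTRAINT UQ_yourtable_Name_ValueCheck UNIQUE (Name, ValueCheck);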
I liked Michael's idea but it will only allow you one false value per name in SQL Server. To avoid this how about using
ALTER TABLE yourtable
ADD [ValueCheck] AS
(case [defaultValue] when (1) then ('~Default#?#') /*Magic string!*/
else value end) persisted
and then add a unique constraint on (Name, ValueCheck).
I am assuming that name, value combinations will be unique. If the value column does not allow NULLs then using NULL rather than the magic string would be preferable otherwise choose a string that cannot appear in the data (e.g. 101 characters long if the value column only allows 100 chars)
You can use a TRIGGER to validate this constraint on update or insert events and roll back the transaction if it was invalid.
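A rough sketch of such a trigger (names and the error message are placeholders; it assumes defaultValue is a bit column):
CREATE TRIGGER trgOneDefaultPerName ON yourtable
AFTER INSERT, UPDATE
AS
BEGIN
    IF EXISTS (
        SELECT Name
        FROM yourtable
        WHERE defaultValue = 1
        GROUP BY Name
        HAVING COUNT(*) > 1
    )
    BEGIN
        RAISERROR('Each Name may only have one row with defaultValue set to true.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END
GO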

Making PostgreSQL a little more error tolerant?

This is sort of a general question that has come up in several contexts, the example below is representative but not exhaustive. I am interested in any ways of learning to work with Postgres on imperfect (but close enough) data sources.
The specific case -- I am using Postgres with PostGIS for working with government data published in shapefiles and xml. Using the shp2pgsql module distributed with PostGIS (for example on this dataset) I often get schema like this:
   Column   |         Type          |
------------+-----------------------+-
 gid        | integer               |
 st_fips    | character varying(7)  |
 sfips      | character varying(5)  |
 county_fip | character varying(12) |
 cfips      | character varying(6)  |
 pl_fips    | character varying(7)  |
 id         | character varying(7)  |
 elevation  | character varying(11) |
 pop_1990   | integer               |
 population | character varying(12) |
 name       | character varying(32) |
 st         | character varying(12) |
 state      | character varying(16) |
 warngenlev | character varying(13) |
 warngentyp | character varying(13) |
 watch_warn | character varying(14) |
 zwatch_war | bigint                |
 prog_disc  | bigint                |
 zprog_disc | bigint                |
 comboflag  | bigint                |
 land_water | character varying(13) |
 recnum     | integer               |
 lon        | numeric               |
 lat        | numeric               |
 the_geom   | geometry              |
I know that at least 10 of those varchars -- the fips, elevation, population, etc., should be ints; but when trying to cast them as such I get errors. In general I think I could solve most of my problems by allowing Postgres to accept an empty string as a default value for a column -- say 0 or -1 for an int type -- when altering a column and changing the type. Is this possible?
If I create the table before importing with the type declarations generated from the original data source, I get better types than with shp2pgsql, and can iterate over the source entries feeding them to the database, discarding any failed inserts. The fundamental problem is that if I have 1% bad fields, evenly distributed over 25 columns, I will lose 25% of my data since a given insert will fail if any field is bad. I would love to be able to make a best-effort insert and fix any problems later, rather than lose that many rows.
Any input from people having dealt with similar problems is welcome -- I am not a MySQL guy trying to batter PostgreSQL into making all the same mistakes I am used to -- just dealing with data I don't have full control over.
Could you produce a SQL file from shp2pgsql and do some massaging of the data before executing it? If the data is in COPY format, it should be easy to parse and change "" to "\N" (insert as null) for columns.
Another possibility would be to use shp2pgsql to load the data into a staging table where all the fields are defined as just 'text' type, and then use an INSERT...SELECT statement to copy the data to your final location, with the possibility of massaging the data in the SELECT to convert blank strings to null etc.
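A sketch of that approach using a few of the columns above (the table names are made up; NULLIF turns empty strings into NULLs before the cast):
insert into cities (gid, st_fips, elevation, population)
select gid::int,
       st_fips,
       nullif(elevation, '')::int,
       nullif(population, '')::int
from cities_staging;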
I don't think there's a way to override the behaviour of how strings are converted to ints and so on: possibly you could create your own type or domain, and define an implicit cast that was more lenient... but this sounds pretty nasty, since the types are really just artifacts of how your data arrives in the system and not something you want to keep around after that.
You asked about fixing it up when changing the column type: you can do that too, for example:
steve#steve#[local] =# create table test_table(id serial primary key, testvalue text not null);
NOTICE: CREATE TABLE will create implicit sequence "test_table_id_seq" for serial column "test_table.id"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "test_table_pkey" for table "test_table"
CREATE TABLE
steve#steve#[local] =# insert into test_table(testvalue) values('1'),('0'),('');
INSERT 0 3
steve#steve#[local] =# alter table test_table alter column testvalue type int using case testvalue when '' then 0 else testvalue::int end;
ALTER TABLE
steve#steve#[local] =# select * from test_table;
id | testvalue
----+-----------
1 | 1
2 | 0
3 | 0
(3 rows)
Which is almost equivalent to the "staging table" idea I suggested above, except that now the staging table is your final table. Altering a column type like this requires rewriting the entire table anyway: so actually, using a staging table and reformatting multiple columns at once is likely to be more efficient.