Looping in SQL where a certain value is equal to something

I'm trying to insert values into a table in SQL in one run.
INSERT INTO sampleTable
(
 ID
,aa
,bb
,cc
,dd
,ee
)
SELECT
 (select id from otherTable where value = 'something')
,aa
,bb
,cc
,dd
,ee
How do I loop it in SQL so that it inserts values for each id in otherTable?

INSERT INTO sampleTable
(
 ID
,aa
,bb
,cc
,dd
,ee
)
SELECT
 id
,aa
,bb
,cc
,dd
,ee
FROM otherTable
WHERE value = 'something'
Explanation: if you want to INSERT..SELECT multiple rows (a set), your SELECT statement needs to return multiple rows. That only works with a FROM clause in the query.
The best way to test an INSERT..SELECT is to remove the INSERT part and see if the SELECT works by itself. Once you are happy with the result, you can add the INSERT part back in front of it.
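For example, run the SELECT from the answer above on its own first:
SELECT id, aa, bb, cc, dd, ee
FROM otherTable
WHERE value = 'something';
Once it returns the rows you expect, put the INSERT INTO sampleTable (...) header back in front of it.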

Related

The select list for the INSERT statement contains fewer items than the insert list (but is identical)

I am trying to develop a procedure that has this basic structure:
select a.*
into #temp1
from OPENQUERY(otherDB, 'SELECT ... FROM ...') a
INSERT INTO [dbo].[Data]
(....)
select *
from #temp1
DROP TABLE #temp1
The number of columns in the result of the OPENQUERY is identical to the number of INSERT columns.
Why could I be getting this error:
The select list for the INSERT statement contains fewer items than the insert list. The number of SELECT values must match the number of INSERT columns.
What if you try to make the SELECT more specific? Example:
insert into dbo.data (col1,col2) select col1,col2.....
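The error means the INSERT column list has more entries than the SELECT returns, so listing the columns explicitly on both sides is the quickest way to find the mismatch. A sketch with hypothetical names (col1..col3 and remoteTable are placeholders for the real ones):
SELECT a.col1, a.col2, a.col3
INTO #temp1
FROM OPENQUERY(otherDB, 'SELECT col1, col2, col3 FROM remoteTable') a
INSERT INTO [dbo].[Data] (col1, col2, col3)
SELECT col1, col2, col3
FROM #temp1
DROP TABLE #temp1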

Tally Table in SQL

I want to create a bunch of data with a tally table in SQL (SQL Server 2008) and definitely need help.
First of all, I have this table which contains 2 columns.
{
AcctNum (nchar(30), null),
DataInfo (nchar(745), null)
}
While I don't care about the data in the DataInfo column, I do want to add about 10k rows to the table with a unique AcctNum on each row.
The problem, though, is that I need to keep the length of the data in both columns. For example, the AcctNum column looks like "400000000000001 ". How do I increment the number while keeping the trailing blank space?
Not sure if I'm making much sense here, but please let me know and I will try to explain more. Thanks!
Using a recursive common table expression:
-- set up a table variable for demo purposes
declare @t table (AcctNum nchar(30) null, DataInfo nchar(745) null);
-- insert the starting value
insert @t values ('400000000000001', null);
-- run the cte to generate the sequence
with cte (acctnum, num) as (
select acctnum, cast(acctnum as bigint) + 1 num -- starting value
from @t
union all
select acctnum, num + 1 from cte
where num < cast(acctnum as bigint) + 10000 -- stopping value
)
-- insert the data sequence into the table
insert @t (AcctNum, DataInfo)
select num, null from cte
option (maxrecursion 10000);
select * from @t;
The table variable @t will now contain acctnum 400000000000001 -> 400000000010001 as a contiguous sequence.
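Since the question asks about a tally table: here is a set-based sketch that avoids recursion entirely, assuming SQL Server 2008 (sys.all_objects is used only as a convenient row source; any sufficiently large table works):
insert @t (AcctNum, DataInfo)
select cast(400000000000001 + n.num as nchar(30)), null
from (
select top (10000) row_number() over (order by (select null)) as num
from sys.all_objects a cross join sys.all_objects b
) n;
The fixed-length nchar(30) column pads the value with trailing blanks automatically, which preserves the format from the question.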

Insert data in 3 tables at a time using Postgres

I want to insert data into 3 tables with a single query.
My tables look like this:
CREATE TABLE sample (
id bigserial PRIMARY KEY,
lastname varchar(20),
firstname varchar(20)
);
CREATE TABLE sample1(
user_id bigserial PRIMARY KEY,
sample_id bigint REFERENCES sample,
adddetails varchar(20)
);
CREATE TABLE sample2(
id bigserial PRIMARY KEY,
user_id bigint REFERENCES sample1,
value varchar(10)
);
I will get a key back from every insertion, and I need to insert that key into the next table.
My query is:
insert into sample(firstname,lastname) values('fai55','shaggk') RETURNING id;
insert into sample1(sample_id, adddetails) values($id,'ss') RETURNING user_id;
insert into sample2(user_id, value) values($id,'ss') RETURNING id;
But if I run single queries they just return values to me and I cannot reuse them in the next query immediately.
How to achieve this?
Use data-modifying CTEs:
WITH ins1 AS (
INSERT INTO sample(firstname, lastname)
VALUES ('fai55', 'shaggk')
-- ON CONFLICT DO NOTHING -- optional addition in Postgres 9.5+
RETURNING id AS sample_id
)
, ins2 AS (
INSERT INTO sample1 (sample_id, adddetails)
SELECT sample_id, 'ss' FROM ins1
RETURNING user_id
)
INSERT INTO sample2 (user_id, value)
SELECT user_id, 'ss2' FROM ins2;
Each INSERT depends on the one before. SELECT instead of VALUES makes sure nothing is inserted in subsidiary tables if no row is returned from a previous INSERT. (Since Postgres 9.5+ you might add an ON CONFLICT.)
It's also a bit shorter and faster this way.
Typically, it's more convenient to provide complete data rows in one place:
WITH data(firstname, lastname, adddetails, value) AS (
VALUES -- provide data here
('fai55', 'shaggk', 'ss', 'ss2') -- see below
, ('fai56', 'XXaggk', 'xx', 'xx2') -- works for multiple input rows
-- more?
)
, ins1 AS (
INSERT INTO sample (firstname, lastname)
SELECT firstname, lastname -- DISTINCT? see below
FROM data
-- ON CONFLICT DO NOTHING -- UNIQUE constraint? see below
RETURNING firstname, lastname, id AS sample_id
)
, ins2 AS (
INSERT INTO sample1 (sample_id, adddetails)
SELECT ins1.sample_id, d.adddetails
FROM data d
JOIN ins1 USING (firstname, lastname)
RETURNING sample_id, user_id
)
INSERT INTO sample2 (user_id, value)
SELECT ins2.user_id, d.value
FROM data d
JOIN ins1 USING (firstname, lastname)
JOIN ins2 USING (sample_id);
You may need explicit type casts in a stand-alone VALUES expression - as opposed to a VALUES expression attached to an INSERT where data types are derived from the target table. See:
Casting NULL type when updating multiple rows
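A minimal sketch of such casts (only the first row needs them; later rows inherit the types):
WITH data(firstname, lastname, adddetails, value) AS (
   VALUES
      ('fai55'::varchar(20), 'shaggk'::varchar(20), 'ss'::varchar(20), 'ss2'::varchar(10))
    , ('fai56', 'XXaggk', 'xx', 'xx2')
   )
SELECT * FROM data;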
If multiple rows can come with identical (firstname, lastname), you may need to fold duplicates for the first INSERT:
...
INSERT INTO sample (firstname, lastname)
SELECT DISTINCT firstname, lastname FROM data
...
You could use a (temporary) table as data source instead of the CTE data.
It would probably make sense to combine this with a UNIQUE constraint on (firstname, lastname) in the table and an ON CONFLICT clause in the query.
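A minimal sketch of that variant, reusing the column definitions from above:
CREATE TEMP TABLE data (
   firstname  varchar(20)
 , lastname   varchar(20)
 , adddetails varchar(20)
 , value      varchar(10)
);
INSERT INTO data VALUES ('fai55', 'shaggk', 'ss', 'ss2');
-- then drop the CTE "data" and let ins1 / ins2 read from this temp table instead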
Related:
How to use RETURNING with ON CONFLICT in PostgreSQL?
Is SELECT or INSERT in a function prone to race conditions?
Something like this
with first_insert as (
insert into sample(firstname, lastname)
values ('fai55', 'shaggk')
RETURNING id
),
second_insert as (
insert into sample1 (sample_id, adddetails)
values
( (select id from first_insert), 'ss')
RETURNING user_id
)
insert into sample2 (user_id, value)
values
( (select user_id from second_insert), 'ss');
As the generated id from the insert into sample2 is not needed, I removed the returning clause from the last insert.
Typically, you'd use a transaction to avoid writing complicated queries.
http://www.postgresql.org/docs/current/static/sql-begin.html
http://dev.mysql.com/doc/refman/5.7/en/commit.html
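A sketch of that transactional variant in plain Postgres SQL, using currval() on the serial sequences to reuse the generated keys statement by statement:
BEGIN;
INSERT INTO sample (firstname, lastname) VALUES ('fai55', 'shaggk');
INSERT INTO sample1 (sample_id, adddetails)
VALUES (currval(pg_get_serial_sequence('sample', 'id')), 'ss');
INSERT INTO sample2 (user_id, value)
VALUES (currval(pg_get_serial_sequence('sample1', 'user_id')), 'ss2');
COMMIT;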
You could also use a CTE, assuming your Postgres tag is correct. For instance:
with sample_ids as (
insert into sample(firstname, lastname)
values ('fai55', 'shaggk')
RETURNING id
), sample1_ids as (
insert into sample1(sample_id, adddetails)
select id, 'ss'
from sample_ids
RETURNING sample_id, user_id
)
insert into sample2(user_id, value)
select user_id, 'val'
from sample1_ids
RETURNING id, user_id;
You could create an after insert trigger on the Sample table to insert into the other two tables.
The only issue I see with doing this is that you won't have a way of inserting adddetails; it will always be empty, or in this case 'ss'. There is no way to insert a column into sample that's not actually in the sample table, so you can't send it along with the initial insert.
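A sketch of such a trigger in PL/pgSQL; the 'ss' / 'ss2' values have to be hard-coded, which is exactly the limitation described above:
CREATE FUNCTION after_sample_insert() RETURNS trigger AS $$
DECLARE
  new_user_id bigint;
BEGIN
  INSERT INTO sample1 (sample_id, adddetails)
  VALUES (NEW.id, 'ss') -- hard-coded: adddetails is not a column of sample
  RETURNING user_id INTO new_user_id;
  INSERT INTO sample2 (user_id, value)
  VALUES (new_user_id, 'ss2');
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER sample_after_insert
AFTER INSERT ON sample
FOR EACH ROW EXECUTE PROCEDURE after_sample_insert();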
Another option would be to create a stored procedure to run your inserts.
You have the question tagged both mysql and postgresql; which database are we talking about here?

INSERT INTO With a SubQuery and some operations

I'm trying to insert some data into a table that contains two things: "a string" and "the maximum number in the Order column + 1".
This is my query:
INSERT INTO MyTable ([Text],[Order])
SELECT 'MyText' , (Max([Order]) + 1)
FROM MyTable
What is going wrong with my query?
I'm using Microsoft SQL Server 2005 SP3.
You can test this query like this; I don't receive an error:
create table #MyTable
(
[Text] varchar(40),
[Order] int NOT NULL
)
INSERT INTO #MyTable([Text],[Order])
SELECT 'MyText' [Text], isnull(max([order]) + 1, 0) [Order]
FROM #MyTable
drop table #MyTable
Original:
INSERT INTO MyTable ([Text],[Order])
SELECT 'MyText' [Text], max([Order]) + 1 [Order]
FROM MyTable
or
INSERT INTO MyTable ([Text],[Order])
SELECT top 1 'MyText' [Text], max([Order]) + 1 [Order]
FROM MyTable
LIMIT is not valid in SQL Server as far as I know.
Cannot insert the value NULL into column 'Order', table 'master.dbo.MyTable'; column does not allow nulls. INSERT fails. The statement has been terminated.
This means that the Order column isn't allowed to be null, and that the Max([Order]) + 1 part of your SELECT returns NULL.
This is because your table is empty, as you already noticed by yourself.
You can work around this by replacing NULL by a real number in the query, using ISNULL():
INSERT INTO MyTable ([Text],[Order])
SELECT 'MyText' , (isnull(Max([Order]),0) + 1)
FROM MyTable
Unless he has a column named OrderBy, he would have to add / assign all values within that INSERT, especially if the column does not allow nulls.
It sounds like fully qualifying the INSERT with dbo.MyTable.Field may make more sense.
Also, why are you naming fields with SQL keywords...?
INSERT INTO MyTable ([Text],[Order]) VALUES ('MyTextTest', 1)
Try a test insert first.

SQL query for selecting only first occurrences of rows with same data in the first column

Is there a neat SQL query that returns only the first occurrence of each set of rows that have the same data in the first column? That is, if I have rows like
blah something
blah somethingelse
foo blah
bar blah
foo hello
the query should give me the first, third and fourth rows (because the first row is the first occurrence of "blah" in the first column, the third row is the first occurrence of "foo" in the first column, and the fourth row is the first occurrence of "bar" in the first column).
I'm using H2 database engine, if that matters.
Update: sorry about the unclear table definition; here it is more clearly. The "blah", "foo", etc. denote the value of the first column in each row.
blah [rest of columns of first row]
blah [rest of columns of second row]
foo [-""- third row]
bar [-""- fourth row]
foo [-""- fifth row]
If you meant alphabetically on column 2, here is some SQL to get those rows:
create table #tmp (
c1 char(20),
c2 char(20)
)
insert #tmp values ('blah','something')
insert #tmp values ('blah','somethingelse')
insert #tmp values ('foo','ahhhh')
insert #tmp values ('foo','blah')
insert #tmp values ('bar','blah')
insert #tmp values ('foo','hello')
select c1, min(c2) c2 from #tmp
group by c1
An analytic function could do the trick.
Select *
from (
Select rank() over (partition by c1 order by c2) as myRank, t.*
from myTable t ) ranked
where myRank = 1
But ranking functions are only a priority 2 item on the H2 1.3.x roadmap:
http://www.h2database.com/html/roadmap.html?highlight=RANK&search=rank#firstFound
I think this does what you want but I'm not 100% sure. (Based on MS SQL Server too.)
create table #t
(
PKCol int identity(1,1),
Col1 varchar(200)
)
Insert Into #t
Values ('blah something')
Insert Into #t
Values ('blah something else')
Insert Into #t
Values ('foo blah')
Insert Into #t
Values ('bar blah')
Insert Into #t
Values ('foo hello')
Select t.*
From #t t
Join (
Select min(PKCol) as 'IDToSelect'
From #t
Group By Left(Col1, CharIndex(space(1), col1))
)q on t.PKCol = q.IDToSelect
drop table #t
If you are interested in the fastest possible query: It's relatively important to have an index on the first column of the table. That way the query processor can scan the values from that index. Then, the fastest solution is probably to use an 'outer' query to get the distinct c1 values, plus an 'inner' or nested query to get one of the possible values of the second column:
drop table test;
create table test(c1 char(20), c2 char(20));
create index idx_c1 on test(c1);
-- insert some data (H2 specific)
insert into test select 'bl' || (x/1000), x from system_range(1, 100000);
-- the fastest query (64 ms)
select c1, (select i.c2 from test i where i.c1=o.c1 limit 1) from test o group by c1;
-- the shortest query (385 ms)
select c1, min(c2) c2 from test group by c1;