Effective way to move data from one table into multiple tables - SQL

I have TableA that has millions of records and 40 columns.
I would like to move:
- columns 1-30 into Table B
- columns 31-40 into Table C
This multiple-insert question shows how I assume I should do it:
INSERT INTO TableB (col1, col2, ...)
SELECT c1, c2,...
FROM TableA...
I wanted to know if there was a different/quicker way I could pass the data. Essentially, I don't want to wait for one table to finish its INSERT processing before the other INSERT statement starts to execute.

I'm afraid there is no way in the SQL standard to have what is often called a T junction at the end of an INSERT .. SELECT. That is the privilege of ETL tools. But ETL tools connect twice to the database, once for each leg of the T junction, and the two resulting INSERT INTO tab_x VALUES (?,?,?,?) statements run in parallel.
Which brings me to a possible solution that could make sense:
Create two scripts. One goes INSERT INTO TableB SELECT col1, ..., col30 FROM TableA;. The other goes INSERT INTO TableC SELECT col31, ..., col40 FROM TableA;. Then, as it's SQL Server, launch two isql sessions in parallel, each running its own script.
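A rough sketch of that approach (column lists abbreviated with comments; on recent SQL Server versions the two sessions would typically be sqlcmd rather than isql):
-- script_b.sql: columns 1-30
INSERT INTO TableB (col1, col2, /* ... */ col30)
SELECT col1, col2, /* ... */ col30
FROM TableA;
-- script_c.sql: columns 31-40
INSERT INTO TableC (col31, col32, /* ... */ col40)
SELECT col31, col32, /* ... */ col40
FROM TableA;
-- run each script from its own session at the same time, so the two
-- inserts execute in parallel rather than one after the other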

Related

Merge into multiple tables using two different SELECT clauses from a WITH statement (Oracle)

I have a procedure that is refreshed daily. I want to insert into or merge into 2 tables from a single source, but I don't want to create a table, then delete and re-insert its data, just for the source (the T1 table) (I don't use that source table for reporting, and it has many rows, so the procedure might run slowly).
My desired result is:
Merge into or insert into 2 existing tables using
- a WITH t1 clause, and
- two SELECT queries from that t1 (the two SELECT queries have different purposes: one maps the detail, roughly grouped by ID, and one groups by month, etc., from t1).
From what I have researched so far, INSERT ALL only supports a single query inserting into multiple tables, and MERGE INTO only merges into 1 table from 1 single query.
How to combine these?
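For illustration, the repetitive version the asker wants to avoid looks roughly like this (table and column names are hypothetical); the question is whether the two statements can share a single evaluation of t1:
-- detail target: grouped by ID
INSERT INTO detail_tbl (id, total_amount)
WITH t1 AS (SELECT id, trx_month, amount FROM source_tbl)
SELECT id, SUM(amount) FROM t1 GROUP BY id;
-- monthly target: grouped by month, repeating the same WITH clause
INSERT INTO monthly_tbl (trx_month, total_amount)
WITH t1 AS (SELECT id, trx_month, amount FROM source_tbl)
SELECT trx_month, SUM(amount) FROM t1 GROUP BY trx_month;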

SQL: Insert certain records from one table into another and also add a few other fields using a query

I have two tables, say TABLE1 and TABLE2, and say the field id is common to both. The rest of the fields are different.
I now select all distinct id values from TABLE1 and want to insert them into TABLE2 while also writing its other attributes, like the pseudocode below.
for each distinct id (i) in TABLE1:
INSERT in TABLE2 (i, false, unix_timestamp())
end
For some reason I cannot use a programming language to do this. Is it possible to do this in SQL using Apache Drill?
What you could do is write a query that produces the output you're looking for and then save that as a table. Drill is really a query engine and doesn't support INSERT operations the way a database does.
So a pseudo-query might look like this:
CREATE TABLE <your file> AS
SELECT ...
Then you could query that file. I don't know if that helps or not. You can also create views and temporary tables, but Drill itself doesn't really implement INSERT commands.
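For example, a CTAS matching the pseudocode above might look roughly like this (the dfs.tmp workspace and the column names are assumptions; adjust them to your storage plugin and schema):
CREATE TABLE dfs.tmp.`table2_new` AS
SELECT DISTINCT t1.id,
       FALSE            AS some_flag,   -- the constant false attribute
       UNIX_TIMESTAMP() AS created_at   -- Drill's current Unix timestamp
FROM dfs.tmp.`table1` t1;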

How to run a SQL statement every time a row is created?

We are using an ERP system which uses SQL Server. There is a function which creates a row 'A' in a specific table and populates it with data from another row 'B' of another table. For some reason the programmer thought one would only need certain values of 'B' in 'A', so only the values of some columns in 'B' are copied.
Now I want more columns to be copied than the program copies. The columns are there but they don't get copied.
The program offers a way to run a script before the SQL statement that creates the row is executed. The problem is that I don't know the ID of the row which will be created. And even if I did, the row isn't created yet, so I can't alter it.
Is there a way in SQL Server to run a SQL script every time after a row is created in a specific table?
Thanks for the help.
Yes - those are called triggers.
You can write triggers that get fired after INSERT, UPDATE or DELETE - or they can be INSTEAD OF triggers, too - if you need to completely take control of an operation.
In your case, I believe an AFTER INSERT trigger should be just fine:
CREATE TRIGGER TrgCopyAdditionalColumns
ON dbo.TableA
AFTER INSERT
AS
-- the newly inserted rows (there could be **multiple!**)
-- are available in the `Inserted` pseudo table, which has the
-- exact same structure as the table the trigger is on ("TableA") -
-- join it back to "TableB" to pick up the extra columns you need
UPDATE a
SET a.ColX = b.ColX,
    a.ColY = b.ColY   -- ... whatever additional columns the program doesn't copy
FROM
dbo.TableA AS a
INNER JOIN
Inserted AS i ON a.ID = i.ID
INNER JOIN
-- somehow, you need to connect your Table B's rows to the
-- newly inserted rows for Table A that are present in the
-- "Inserted" pseudo table, to get only those rows of data from
-- Table B that are relevant to the newly inserted Table A rows
dbo.TableB AS b ON b.A_ID = i.ID

SQL Trigger Inserting from Multiple tables

I am trying to execute a query within a SQL trigger.
I have 4 tables A, B, C, D. Table A is a lookup list and contains roughly 1400 rows of data. Table B holds values being input through an HMI with a timestamp. Table C is the table where my values are intended to go. Table D is a list of multipliers used to multiply values from table A with values from table B (I am only using one multiplier from table D at the moment).
When a user inputs data into table B, that should trigger the procedure to get the values that were inserted (including the itemnumber) and relate the itemnumber to table A and use table D to multiply a few things together to send values to Table C. If I only input 3 rows of data in table B for example, I should only get three rows of data in table C. I am merely using table A to match the item number and get some data. But for some reason I am inserting way more records than intended, over 1600 rows.
Table D multipliers have a timestamp that does not match or have any correlation with any other table. So I am using a timestamp and selecting the multipliers that are closest to the timestamp from table B (some multipliers will change throughout time and I need a historical multiplier to correctly multiply the right things together)
Your help is most appreciated. Thank you.
Insert into TableC( ItemNumber, Cases, [Description], [Type], Wic, Elc, TotalElc, LbsPerCase, TotalLbs, PeopleRequired, ScheduleHours, Rated, Capacity, [TimeStamp])
Select
b.ItemNumber, b.CaseCount, a.ItemDescription, a.DivisionCode, a.workcenter,
a.LaborPercase as ELC, b.CaseCount * a.LaborPerCase * d.IpCg,
a.LbsPerCase, a.LaborPerCase * b.CaseCount as TotalLbs,
a.PersonReqd, b.Schedulehours, a.PoundRating,
b.ScheduleHours * a.PoundRating as Capactity, b.shift, GETDATE()
from
TableA a, TableB b, TableD d
Where
a.itemnumber = b.itemnumber
and d.IpCG < b.TimeStamp
and b.CasesCount > 0
You do not reference the inserted or deleted tables that are available only in the trigger, so of course you are returning more records than you need in your query.
When first writing a trigger, what I do is create a temp table called #inserted (and/or #deleted) and populate it with several records. It should match the design of the table that the trigger will be on. It is important that your temp table have several input records that meet the various criteria affecting your query (so in your case you want some rows where the case count is 0 and some where it is not, for instance) and that are typical of data inserted into or updated in the table. SQL Server triggers operate on sets of data, so this also ensures that your trigger can properly handle multi-record inserts or updates. A properly written trigger has test cases you need to run to make sure everything happens correctly; your #inserted table should include records that cover all those test cases.
Then write the query in a transaction (and roll it back while you are testing), joining to #inserted. If you are doing an insert with a select, only write the select part until you get that right, then add the insert. For testing, write a select from the table you are inserting into, in order to see the data you inserted before you roll back.
Once you get everything working, change the #inserted references to inserted, remove any testing code and of course the rollback (possibly the whole transaction, depending on what you are doing), and add the drop and create trigger parts of the code. Now you can test your trigger as a trigger, but you are in good shape because you know it is likely to work from your earlier testing.
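A minimal sketch of that workflow, with made-up test rows and a trimmed-down column list (the names follow the tables above but are only placeholders):
-- stand-in for the real "inserted" pseudo table while developing the query
CREATE TABLE #inserted (ItemNumber int, CaseCount int, ScheduleHours decimal(9,2), [TimeStamp] datetime);
INSERT INTO #inserted (ItemNumber, CaseCount, ScheduleHours, [TimeStamp])
VALUES (100, 5, 8.0, GETDATE()),   -- typical row
       (200, 0, 8.0, GETDATE());   -- edge case: CaseCount = 0
BEGIN TRANSACTION;
    -- develop the SELECT against #inserted first; once the row counts look
    -- right, wrap it in the INSERT INTO TableC and swap #inserted for inserted
    SELECT b.ItemNumber, b.CaseCount, a.ItemDescription
    FROM #inserted b
    INNER JOIN TableA a ON a.ItemNumber = b.ItemNumber;
ROLLBACK TRANSACTION;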

Query select a bulk of IDs from a table - SQL

I have a table which holds ~1M rows. My application has a list of ~100K IDs which belong to that table (the list being generated by the application layer).
Is there a common method for querying all of these IDs? ~100K SELECT queries? A temporary table into which I insert the ~100K IDs, and then a SELECT query joining it to the required table?
Thanks,
Doori Bar
You could do it in one query, something like
SELECT * FROM large_table WHERE id IN (...)
Insert a comma-separated list of IDs where I put the ...
Unfortunately, there is no easy way that I know of to parametrize this, so you need to be extra-super careful to avoid SQL injection vulnerabilities.
A temporary table which holds the 100k IDs seems like a good solution. Don't insert them one by one, though; the INSERT ... VALUES syntax in MySQL accepts the insertion of multiple rows.
By the way, where do you get your 100k IDs, if it's not from the database? If they come from a preceding request, I'd suggest having that request fill the temporary table.
Edit: for a more portable way of doing a multi-row insert:
INSERT INTO mytable (col1, col2) SELECT 'foo', 0 UNION SELECT 'bar', 1
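Putting the temporary-table suggestion together, a rough sketch (MySQL syntax, placeholder names, and only three of the ~100K IDs shown):
CREATE TEMPORARY TABLE wanted_ids (id INT PRIMARY KEY);
-- the application sends the IDs in large multi-row batches, not one by one
INSERT INTO wanted_ids (id) VALUES (17), (42), (9001);
SELECT t.*
FROM large_table t
JOIN wanted_ids w ON w.id = t.id;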
Do those id's actually reference the table with 1M rows?
If so, you could use SELECT ids FROM <1M table>
where ids is the ID column and where "1M table" is the name of the table which holds the 1M rows,
but I don't think I really understand your question...