Can anybody help me with the syntax?
insert into history (company,partnumber,price)
values ('blah','IFS0090','0.00')
if company NOT IN ('blah','blah2','blah3','blah4','blah4')
and partnumber='IFS0090';
Background:
I have a history table which stores daily company, product, and price data, but sometimes a company will remove itself for a few days. Complicating the issue: I'm only saving daily CHANGES to prices, not snapshotting the entire day's list (the data would be huge), so when I display the data the company will still come up with the previous day's price. So I need to do something like this, where a 0.00 price means they're no longer there.
Use:
INSERT INTO HISTORY
(company, partnumber, price)
SELECT 'blah', 'IFS0090','0.00'
FROM HISTORY h
WHERE h.company NOT IN ('blah','blah2','blah3','blah4','blah4')
AND h.partnumber = 'IFS0090'
You are mixing two completely different concepts in your statement. Choose one:
Either you want to INSERT constant values (in that case, make your checks in your programming language and generate the INSERT INTO ... VALUES (...) accordingly),
or you want to insert the filtered contents of another table.
The latter is possible in MySQL (that's the INSERT ... SELECT syntax), the query would look like this:
INSERT INTO history (...)
SELECT ...
FROM liveTable
INNER JOIN moreTables ...
-- this is a regular SELECT statement, as you might have guessed by now
WHERE company NOT IN ('blah','blah2','blah3','blah4','blah4')
AND partnumber='IFS0090';
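For a concrete version of that shape using the columns from the question (just a sketch; liveTable is a placeholder for whatever table holds the current prices):
INSERT INTO history (company, partnumber, price)
SELECT l.company, l.partnumber, 0.00    -- 0.00 marks the company as no longer listed
FROM liveTable l                        -- placeholder name, not an actual table from the question
WHERE l.company NOT IN ('blah','blah2','blah3','blah4')
AND l.partnumber = 'IFS0090';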
I'm working on a project tracking grocery expenses. I have the following tables with predefined values already inserted:
Store (where we bought the food)
Shopper (me or my wife)
Category (of food)
I also have tables that are awaiting input.
They are:
Receipt (one shopping trip with multiple food items)
Food (each food item)
FoodReceipt (bridge table between Receipt and Food)
I have my constraints set up the way I need them, but I am at a bit of a loss when it comes to writing an INSERT statement that would allow me to insert a new record that references values in the other tables. Any thoughts would be greatly appreciated.
Thanks!
SCOPE_IDENTITY will give you the single value of the last identity. While it may well work in this case, that isn't necessarily the best approach in the general case when you want to insert in sets.
I'd consider using the OUTPUT clause to output the inserted items' IDs into a temp table; then you can join them back for the subsequent inserts.
INSERT INTO... OUTPUT INSERTED.ID INTO tempIDs
INSERT INTO other_table inner join tempIDs ...
Wrap it up in a SP.
https://learn.microsoft.com/en-us/sql/t-sql/queries/output-clause-transact-sql?view=sql-server-2017
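A minimal T-SQL sketch of that pattern; the column names (StoreID, ShopperID, PurchaseDate, ReceiptID, FoodID) are assumptions for illustration, not the poster's actual schema:
DECLARE @ReceiptIDs TABLE (ReceiptID INT);
-- insert the shopping trip and capture its generated ID via OUTPUT
INSERT INTO Receipt (StoreID, ShopperID, PurchaseDate)   -- assumed columns
OUTPUT INSERTED.ReceiptID INTO @ReceiptIDs (ReceiptID)
VALUES (1, 2, '2019-03-01');
-- use the captured ID for the bridge-table rows
INSERT INTO FoodReceipt (ReceiptID, FoodID)
SELECT r.ReceiptID, f.FoodID
FROM @ReceiptIDs r
CROSS JOIN (VALUES (10), (11), (12)) AS f (FoodID);      -- assumed food IDs bought on this trip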
Using the final three tables as an example: insert into the Receipt table and use the SCOPE_IDENTITY() function to get the id (you have to use an identity column as the primary key). Then repeat the following for each food item: insert into the Food table, use SCOPE_IDENTITY() to get the primary key, and then insert a row into FoodReceipt with the saved value for the receipt and the saved value for the current food row.
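For illustration, a sketch of that flow in T-SQL, again with assumed column names and a single food item shown:
-- insert the receipt and remember its identity value
INSERT INTO Receipt (StoreID, ShopperID, PurchaseDate)
VALUES (1, 2, '2019-03-01');
DECLARE @ReceiptID INT = SCOPE_IDENTITY();
-- for each food item: insert it, remember its identity, then link it to the receipt
INSERT INTO Food (FoodName, CategoryID)
VALUES ('Apples', 3);
DECLARE @FoodID INT = SCOPE_IDENTITY();
INSERT INTO FoodReceipt (ReceiptID, FoodID)
VALUES (@ReceiptID, @FoodID);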
I need to plug a load of data from a separate program into a single table (in Oracle SQL Developer). This transfer of data is going to be in one direction, meaning the system will just occasionally dump a load of data in the table, replacing what was there before. I therefore don't have to worry about being able to update individual fields. I also can't modify how this system transfers the data into my table, which means I am stuck with mapping its fields to my column headers (it's just sending the data using INSERTs behind the scenes).
I want the table to have a unique TRANSACTION_ID column. However, each TRANSACTION_ID might have multiple TRANSACTION_TYPEs, so I will receive multiple rows for each ID with a different TRANSACTION_TYPE. e.g:
INSERT INTO TEST_TABLE (TRANSACTION_ID, TRANSACTION_TYPE) VALUES (1000, 'TT35')
INSERT INTO TEST_TABLE (TRANSACTION_ID, TRANSACTION_TYPE) VALUES (1000, 'TT40')
INSERT INTO TEST_TABLE (TRANSACTION_ID, TRANSACTION_TYPE) VALUES (1000, 'TT12')
INSERT INTO TEST_TABLE (TRANSACTION_ID, TRANSACTION_TYPE) VALUES (1001, 'TT12')
......etc.
I want to concatenate these into a single field separated by commas, so the final table would look like:
TRANSACTION_ID   TRANSACTION_TYPES
--------------   -----------------
1000             TT35,TT40,TT12
1001             TT12
1002             TT40,TT23
I realise that this is de-normalising the data, but since I do not need to update it I am not overly concerned.
I understand the way to do this usually is by using a MERGE, but since I am stuck with the INSERT actions of the source system I cannot use this. Is it possible to do this using a trigger? I've run into mutating table errors etc. in my previous attempts.
The last resort might be to store the TRANSACTION_TYPEs in a separate table, treat the data, and then delete the second table, but that seems ridiculously over-complicated.
Is there a straight-forward way of doing this that I'm missing?
Thanks
This is too long for a comment.
You probably could do this with a trigger, but I wouldn't recommend it. The trigger would need to replace the insert, sometimes doing an insert and sometimes concatenating the values.
Two other options. First, load the data into a staging table and then create a new table that your process uses.
The second is to just ignore the problem and use LISTAGG() to bring the data together when you are querying it.
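For the second option, a sketch of the query-time aggregation with LISTAGG (TEST_TABLE and its two columns come from the question; the ordering inside the list is an assumption):
SELECT transaction_id,
       LISTAGG(transaction_type, ',') WITHIN GROUP (ORDER BY transaction_type) AS transaction_types
FROM test_table
GROUP BY transaction_id;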
I have a table tt_auth_itm (authorization items) with integer columns tt_auth_id (authorization profile ID) and tt_func_id (functionality ID) and some other fields.
For each unique tt_auth_id (authorization profile) records have to be present for a range of tt_func_id (all existing functionalities, not a continuous range BTW).
But some records are missing (colored areas in the picture, which lists all combinations of tt_auth_id and tt_func_id, two columns at a time).
I know that for tt_auth_id=1 records for all required tt_func_id are present.
To insert the missing records for tt_auth_id=2 I can do this, copying the other field values from the corresponding tt_func_id record where tt_auth_id=1:
insert into tt_auth_itm (tt_auth_id, tt_func_id, other, fields)
select 2 as tt_auth_id, tt_func_id, other, fields
from tt_auth_itm o where o.tt_auth_id=1
and not exists
(select * from tt_auth_itm i where i.tt_auth_id=2 and o.tt_func_id=i.tt_func_id)
Now I want to extend this into one insert statement for all the other tt_auth_id values that have missing records.
I can't get my head around that. I tried to simply change the above query to:
insert into tt_auth_itm (tt_auth_id, tt_func_id, other, fields)
select tt_auth_id, tt_func_id, other, fields
from tt_auth_itm o where o.tt_auth_id=1
and not exists
(select * from tt_auth_itm i where (i.tt_auth_id=o.tt_auth_id) and (o.tt_func_id=i.tt_func_id))
but that is not sufficient (the select returns no rows, because the NOT EXISTS always finds the tt_auth_id=1 row itself); the code needs another level of indirection...
What SQL statement correctly extends my '2' case to the generic one?
Please use generic SQL, because I need it for SQL Server, Oracle, Firebird
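One generic-SQL sketch of that extra level of indirection (an illustration, not a tested answer): pair every other tt_auth_id with the template rows of tt_auth_id=1 and keep only the combinations that do not exist yet:
insert into tt_auth_itm (tt_auth_id, tt_func_id, other, fields)
select a.tt_auth_id, t.tt_func_id, t.other, t.fields
from (select distinct tt_auth_id from tt_auth_itm where tt_auth_id <> 1) a
cross join (select tt_func_id, other, fields from tt_auth_itm where tt_auth_id = 1) t
where not exists
(select * from tt_auth_itm i where i.tt_auth_id = a.tt_auth_id and i.tt_func_id = t.tt_func_id)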
I wrote an application for resident housing at a college. In one of the tables (rooms) I have a list of all the rooms and their current/max occupancy. Now I've added a new column called "semester" and set all of the existing rows to have a semester value of "fall". I want to copy and paste all of these rows into the table but change the semester value to "spring". The result should be twice as many rows as I started with: half with fall in the semester value and half with spring. What's the best way to accomplish this?
INSERT INTO rooms
(roomname, current_occupancy, max_occupancy, semester)
SELECT roomname, current_occupancy, max_occupancy,'spring'
FROM rooms
WHERE [semester]='fall'
(assuming names for your room and occupancy columns)
Use a temp table to make it simple, no matter how many columns are involved:
SELECT * INTO #ROOMS FROM ROOMS;
UPDATE #ROOMS SET SEMESTER='spring';
INSERT INTO ROOMS SELECT * FROM #ROOMS;
Insert Into Rooms
Select col1, col2, col3, 'Spring' as Semester -- select each column in order except 'Semester', pass it in literally as 'Spring'
From rooms where
Semester = 'Fall'
Well, if you're just trying to do this inside SQL Server Management Studio, you could copy the table, run an UPDATE command to set the semester to spring on the cloned table, and then use the wizard to append the data from the cloned table to the existing table.
If you know a programming language you could pull all of the data, modify the semester, then insert the data into the existing table.
Note: The other answers are a much better way of achieving this.
I understand that what I am asking for may not make a lot of sense, but I nonetheless have a particular need for it. I have a table that has 500 rows in it. I have another table with 500 more rows that I need to merge into the first table. The easiest way I know to do that is to add 500 rows to the first table and then use an UPDATE statement, because then I have a primary key to use to pair the first and second tables.
So how can I add 500 blank rows to my first table? I've been trying to think of a query that would do that, but haven't been able to come up with anything...
You can insert to one table from another table:
INSERT INTO suppliers (supplier_id, supplier_name)
SELECT account_no, name
FROM customers
WHERE city = 'Newark';
You can use an INSERT INTO statement:
SQLite: select into?
As long as the tables have the same structure, you can use a simple query to insert them into your table:
INSERT INTO tableOne SELECT * FROM tableTwo
If you have to manually map the fields, you'll have to change it to the field level insert, such as:
INSERT INTO tableOne(columnOne,columnTwo) SELECT column3, column4 FROM tableTwo
You can add standard WHERE clauses to these as well.
Hope that helps.