I got this error in Oracle SQL with an INSERT ... SELECT query and don't know where the error comes from.
the SQL Query is:
insert into GroupScenarioAction (ID, ID_UUID, GPSCENARIO_UUID, ACTION, VERSION)
(select DEFAULT , '0', ACTION.ID_UUID, '5310AFAA......', '1', ACTION_ID, '0'
from ACTION where ACTION.id not in (select ACTION FROM GroupScenarioAction where
GPSCENARIO = '1'));
The error is ORA-00936: missing expression at position 129.
It is difficult to assist because
you posted relevant data as images (why do you expect us to type all of that so that we could try it?) instead of code (which can easily be copy/pasted and used afterwards)
the code you posted (the insert statement itself) uses columns that don't exist in any of the tables whose descriptions you posted
for example, the insert inserts into GroupScenarioAction, but there's no such table there; maybe it is goroohscenarioaction? Also, there's no action_id column in the action table
you're inserting values into 5 columns, but the SELECT statement contains 7 columns; that raises the ORA-00913: too many values error, so you never even reach the missing expression error
In short, it is as if you tried to do everything you could to prevent us from helping you.
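For what it's worth, the column-count mismatch alone is enough to kill the statement in any engine, before a single row is read. A minimal sketch using SQLite via Python (the table and column names here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tgt (id INTEGER, a TEXT, b TEXT)")
conn.execute("CREATE TABLE src (x TEXT, y TEXT)")

# Inserting into 2 columns while selecting 3 values: the engine rejects
# the statement at parse/validation time, even though src is empty
# (in Oracle this is ORA-00913: too many values; too few values
# would be ORA-00947).
try:
    conn.execute("INSERT INTO tgt (id, a) SELECT x, y, 'z' FROM src")
except sqlite3.OperationalError as e:
    print(e)  # column-count mismatch error
```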
One of the comments you posted says
It's the primary key so where are those values supposed to come from?
That's the default keyword in
insert into GroupScenarioAction (ID, ...)
(select DEFAULT, ...
-------
this
Looks like the ID column is created as an identity column whose value is autogenerated (i.e. Oracle takes care of it), which also means that you're on Oracle 12c or above (there was no such option in lower versions). On the other hand, the create table goroohscenarioaction statement doesn't suggest anything like that.
Anyway: if you do it right, it works. I created two sample tables with a minimal column set, just to make the insert work. Also, as I'm on 11gXE (which doesn't support identity columns), I'm inserting a sequence value, which is basically what an identity column uses in the background anyway:
SQL> create table groupscenarioaction
2 (id number,
3 id_uuid raw(255),
4 gpscenario_uuid raw(255),
5 action number,
6 version number
7 );
Table created.
SQL> create table action
2 (id_uuid raw(255),
3 id number
4 );
Table created.
SQL> create sequence seq;
Sequence created.
Here's the insert you posted; I commented out columns that either don't exist or are superfluous. It works; though it didn't insert anything, as my tables are empty, it doesn't raise any error:
SQL> insert into GroupScenarioAction
2 (ID, ID_UUID, GPSCENARIO_UUID, ACTION, VERSION)
3 (select 1 /*DEFAULT*/ , '0', ACTION.ID_UUID, '5310AFAA......', '1' --, id /*ACTION_ID*/, '0'
4 from ACTION
5 where ACTION.id not in (select ACTION FROM GroupScenarioAction
6 where gpscenario_uuid/*GPSCENARIO*/ = '1'));
0 rows created.
Beautified:
SQL> insert into groupscenarioaction
2 (id, id_uuid, gpscenario_uuid, action, version)
3 (select seq.nextval, '0', a.id_uuid, '5310AFAA......', '1'
4 from action a
5 where a.id not in (select g.action
6 from groupscenarioaction g
7 where g.gpscenario_uuid = '1'));
0 rows created.
SQL>
Now you know a little bit more about what's preventing us from helping you. If what I wrote isn't enough, consider editing the original question you posted (simply remove everything that's wrong and write something that is true and that we can use).
Related
I am trying to insert rows into an Oracle 19c table to which we recently added a GENERATED ALWAYS AS IDENTITY column (column name is "ID"). The column should auto-increment and not need to be specified explicitly in an INSERT statement. Typical INSERT statements work, i.e. INSERT INTO table_name (field1,field2) VALUES ('f1', 'f2') (merely an example); the ID field increments when a typical INSERT is executed. But the query below, which was working before the addition of the IDENTITY column, is now failing with the error: ORA-00947: not enough values.
The field counts are identical with the exception of not including the new ID IDENTITY field, which I am expecting to auto-increment. Is this statement not allowed with an IDENTITY column?
Is the INSERT INTO statement, using a SELECT from another table, not allowing this and producing the error?
INSERT INTO T.AUDIT
(SELECT r.IDENTIFIER, r.SERIAL, r.NODE, r.NODEALIAS, r.MANAGER, r.AGENT, r.ALERTGROUP,
r.ALERTKEY, r.SEVERITY, r.SUMMARY, r.LASTMODIFIED, r.FIRSTOCCURRENCE, r.LASTOCCURRENCE,
r.POLL, r.TYPE, r.TALLY, r.CLASS, r.LOCATION, r.OWNERUID, r.OWNERGID, r.ACKNOWLEDGED,
r.EVENTID, r.DELETEDAT, r.ORIGINALSEVERITY, r.CATEGORY, r.SITEID, r.SITENAME, r.DURATION,
r.ACTIVECLEARCHANGE, r.NETWORK, r.EXTENDEDATTR, r.SERVERNAME, r.SERVERSERIAL, r.PROBESUBSECONDID
FROM R.STATUS r
JOIN
(SELECT SERVERSERIAL, MAX(LASTOCCURRENCE) as maxlast
FROM T.AUDIT
GROUP BY SERVERSERIAL) gla
ON r.SERVERSERIAL = gla.SERVERSERIAL
WHERE (r.LASTOCCURRENCE > SYSDATE - (1/1440)*5 AND gla.maxlast < r.LASTOCCURRENCE)
)
Thanks for any help.
Yes, it does; your example insert
INSERT INTO table_name (field1,field2) VALUES ('f1', 'f2')
would also work as
INSERT INTO table_name (field1,field2) SELECT 'f1', 'f2' FROM DUAL
db<>fiddle demo
Your problematic real insert statement is not specifying the target column list, so when it used to work it was relying on the columns in the table (and their data types) matching the results of the query. (This is similar to relying on select *, and potentially problematic for some of the same reasons.)
Your query selects 34 values, so your table had 34 columns. You have now added a 35th column to the table, your new ID column. You know that you don't want to insert directly into that column, but Oracle doesn't, at least at the point where it's comparing the query with the table's columns. The table has 35 columns, so as you haven't said otherwise as part of the statement, it is expecting 35 values in the select list.
There's no way for Oracle to know which of the 35 columns you're skipping. Arguably it could guess based on the identity column, but that would be more work and inconsistent, and it's not unreasonable for it to insist you do the work to make sure it's right. It's expecting 35 values, it sees 34, so it throws an error saying there are not enough values - which is true.
Your question sort of implies you think Oracle might be doing something special to prevent the insert ... select ... syntax if there is an identity column, but in fact it's the opposite: it isn't doing anything special, and it's reporting the column/value count mismatch as it usually would.
So you have to list the columns you are populating; you can't automatically skip one. Your statement needs to be:
INSERT INTO T.AUDIT (IDENTIFIER, SERIAL, NODE, ..., PROBESUBSECONDID)
SELECT r.IDENTIFIER, r.SERIAL, r.NODE, ..., r.PROBESUBSECONDID
FROM ...
using the actual column names of course if they differ from the query column names.
If you can't change that insert statement then you could make the ID column invisible; but then you would have to specify it explicitly in queries, as select * won't see it - but then you shouldn't rely on * anyway.
db<>fiddle
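The same rule is easy to reproduce outside Oracle. A minimal sketch in SQLite via Python, where an INTEGER PRIMARY KEY stands in for the identity column (table and column names are illustrative, not the ones from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit (id INTEGER PRIMARY KEY, f1 TEXT, f2 TEXT)")
conn.execute("CREATE TABLE status (f1 TEXT, f2 TEXT)")
conn.execute("INSERT INTO status VALUES ('a', 'b')")

# No target column list: the engine expects a value for every column,
# including id, so a 2-column SELECT is rejected.
try:
    conn.execute("INSERT INTO audit SELECT f1, f2 FROM status")
except sqlite3.OperationalError as e:
    print(e)  # table audit has 3 columns but 2 values were supplied

# Listing the columns you are populating lets the id autogenerate.
conn.execute("INSERT INTO audit (f1, f2) SELECT f1, f2 FROM status")
print(conn.execute("SELECT id, f1, f2 FROM audit").fetchall())
# [(1, 'a', 'b')]
```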
Based on this topic I've encountered a problem with insertions.
My Tests table contains:
TestID Name
1 test_insert_film
2 test_insert_writer
3 test_insert_location
4 test_delete_film
5 test_delete_writer
6 test_delete_location
I want to insert the IDs of the tests into my TestTables table with the following statement:
INSERT INTO TestTables(TestID)
SELECT TestID
FROM Tests
But I get:
Msg 515, Level 16, State 2, Line 1
Cannot insert the value NULL into column 'TableID', table 'FilmS.dbo.TestTables'; column does not allow nulls. INSERT fails. The statement has been terminated.
TestTables contains 4 columns, one of them being TestID. Why isn't this working?
The column TableID (!) in your table TestTables does not allow NULL values. This column is not in the list of columns to be filled by the INSERT, so the value assumed by default is NULL. This is why you get the error.
You may need something like:
INSERT INTO TestTables(TestID, TableID)
SELECT TestID, '' FROM Tests
This fills the TableID column with a default value. Maybe other columns in the TestTables table are affected too and need to be treated similarly.
PS: You could also modify the TestTables definition to provide a default value for the respective columns. If you do so, you can leave the above statement as it is.
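A minimal sketch of both the failure and the fix, using SQLite via Python (table names follow the question; the single sample row is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tests (TestID INTEGER, Name TEXT)")
conn.execute("INSERT INTO Tests VALUES (1, 'test_insert_film')")
conn.execute("CREATE TABLE TestTables (TestID INTEGER, TableID TEXT NOT NULL)")

# TableID is omitted from the column list, so it defaults to NULL and
# the NOT NULL constraint rejects the row.
try:
    conn.execute("INSERT INTO TestTables (TestID) SELECT TestID FROM Tests")
except sqlite3.IntegrityError as e:
    print(e)  # NOT NULL constraint failed: TestTables.TableID

# Supplying a value for TableID makes the insert succeed.
conn.execute(
    "INSERT INTO TestTables (TestID, TableID) SELECT TestID, '' FROM Tests")
print(conn.execute("SELECT TestID, TableID FROM TestTables").fetchall())
# [(1, '')]
```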
I have four tables.
PERSON   DELIVERY_MAPPING       GENERATION_SYSTEM      DELIVERY_METHOD
------   ----------------       -----------------      ---------------
ID       PERSON_ID              ID                     ID
NAME     GENERATION_SYSTEM_ID   NAME                   NAME
                                DELIVERY_METHOD_ID     IS_SPECIAL
Example data:
PERSON     DELIVERY_MAPPING   GENERATION_SYSTEM      DELIVERY_METHOD
------     ----------------   -----------------      ---------------
1. TOM     1 1                1. COLOR PRINTER  1    1. EMAIL    N
2. DICK    1 2                2. BW PRINTER     1    2. POST     N
3. HARRY   2 3                3. HANDWRITTEN    3    3. PIGEONS  Y
A DELIVERY_METHOD contains ways to deliver letters: EMAIL, POST, PIGEONS. The IS_SPECIAL column marks a record as a special delivery method, indicated by a simple value of Y or N. Only PIGEONS is a special delivery method (Y); the others are not (N).
The GENERATION_SYSTEM holds the systems that will finally print the letter. Example values are COLOR PRINTER and DOT MATRIX PRINTER. Each GENERATION_SYSTEM will always be delivered using one of the DELIVERY_METHODs; there's a foreign key between GENERATION_SYSTEM and DELIVERY_METHOD.
Now, each PERSON can have his letters generated by different GENERATION_SYSTEMs, and since it is a many-to-many relation, we have the DELIVERY_MAPPING table; that's why we have foreign keys on both ends.
So far, so good.
I need to ensure that if a person has his letters generated by a system that uses a special delivery method, then he cannot have multiple generation systems in the mapping list. For example, Dick can't have his letters generated using the colour printer, because he already gets all his handwritten letters delivered by pigeon (which is marked as a special delivery method).
How would I accomplish such a constraint? I tried doing it with a before-insert-or-update trigger on the DELIVERY_MAPPING table, but that causes the mutating table problem when updating.
Can I normalise this scenario even more? Maybe I just haven't normalised my tables properly.
Either way, I'd love to hear your take on this issue. I hope I've been verbose enough (...and if you can propose a better title for this post, that would be great).
For a complicated constraint like this, I think you need to use triggers. I don't think the mutating table problem is an issue, because you are either going to do an update or do nothing.
The only table you need to worry about is Delivery_Mapping. Before allowing a change to this table, you need to run a query on the existing table to get the number of specials and gs's:
select SUM(case when dme.is_special = 'Y' then 1 else 0 end) as NumSpecial,
count(distinct gs.id) as NumGS,
MIN(gs.id) as GSID
from delivery_mapping dm join
generation_system gs
on dm.generation_system_id = gs.id join
delivery_method dme
on gs.delivery_method_id = dme.id
where dm.person_id = PERSONID
With this information, you can check whether the insert/update can proceed. I think you need to check these conditions:
If NumSpecial = 0 and the new delivery method is not special, then proceed.
If NumSpecial = 0 and NumGS = 0, then proceed.
Otherwise fail.
The logic is a bit more complicated for updates.
By the way, I prefer to wrap updates/inserts/deletes in stored procedures, so logic like this doesn't get hidden in triggers. I find that debugging and maintaining procedures is much easier than dealing with triggers, which may be possibly cascading.
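As a sanity check, here is the counting query run against the sample data from the question, using SQLite via Python (a sketch only; the trigger or stored-procedure wiring around it is left out):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE delivery_method (id INTEGER, name TEXT, is_special TEXT);
CREATE TABLE generation_system (id INTEGER, name TEXT, delivery_method_id INTEGER);
CREATE TABLE delivery_mapping (person_id INTEGER, generation_system_id INTEGER);

INSERT INTO delivery_method VALUES (1,'EMAIL','N'), (2,'POST','N'), (3,'PIGEONS','Y');
INSERT INTO generation_system VALUES
    (1,'COLOR PRINTER',1), (2,'BW PRINTER',1), (3,'HANDWRITTEN',3);
INSERT INTO delivery_mapping VALUES (1,1), (1,2), (2,3);
""")

# NumSpecial and NumGS per person, same shape as the query above.
for person_id in (1, 2):
    row = conn.execute("""
        SELECT SUM(CASE WHEN dme.is_special = 'Y' THEN 1 ELSE 0 END),
               COUNT(DISTINCT gs.id)
        FROM delivery_mapping dm
        JOIN generation_system gs ON dm.generation_system_id = gs.id
        JOIN delivery_method dme ON gs.delivery_method_id = dme.id
        WHERE dm.person_id = ?""", (person_id,)).fetchone()
    print(person_id, row)
# Tom (person 1) has 0 special and 2 systems; Dick (person 2) has 1 and 1.
```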
I'd avoid triggers on the base tables for this unless you can guarantee serialization.
You could use an API (the best way), as Gordon says (again, be sure to serialize), or, if that isn't suitable, use a materialized view (we don't need to serialize here, as the check is done on commit):
SQL> create materialized view log on person with rowid, primary key including new values;
Materialized view log created.
SQL> create materialized view log on delivery_mapping with rowid, primary key including new values;
Materialized view log created.
SQL> create materialized view log on generation_system with rowid, primary key (delivery_method_id) including new values;
Materialized view log created.
SQL> create materialized view log on delivery_method with rowid, primary key (is_special) including new values;
Materialized view log created.
We create a materialized view to show the counts of special and non-special links for each user:
SQL> create materialized view check_del_method
2 refresh fast on commit
3 with primary key
4 as
5 select pers.id, count(case del_meth.is_special when 'Y' then 1 end) special_count,
6 count(case del_meth.is_special when 'N' then 1 end) non_special_count
7 from person pers
8 inner join delivery_mapping del_map
9 on pers.id = del_map.person_id
10 inner join generation_system gen
11 on gen.id = del_map.generation_system_id
12 inner join delivery_method del_meth
13 on del_meth.id = gen.delivery_method_id
14 group by pers.id;
Materialized view created.
The MView is defined as fast refresh on commit, so the modified rows are rebuilt on commit. Now the rule is that if both the special and non-special counts are non-zero, that's an error condition.
SQL> create trigger check_del_method_aiu
2 after insert or update on check_del_method
3 for each row
4 declare
5 begin
6 if (:new.special_count > 0 and :new.non_special_count > 0)
7 then
8 raise_application_error(-20000, 'Cannot have a mix of special and non special delivery methods for user ' || :new.id);
9 end if;
10 end;
11 /
Trigger created.
SQL> set serverout on
SQL> insert into delivery_mapping values (1, 3);
1 row created.
SQL> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-20000: Cannot have a mix of special and non special delivery methods for
user 1
ORA-06512: at "TEST.CHECK_DEL_METHOD_AIU", line 6
ORA-04088: error during execution of trigger 'TEST.CHECK_DEL_METHOD_AIU'
CREATE MATERIALIZED VIEW special_queues_mv
NOLOGGING
CACHE
BUILD IMMEDIATE
REFRESH ON COMMIT
ENABLE QUERY REWRITE
AS SELECT dmap.person_id
, SUM(DECODE(dmet.is_special, 'Y', 1, 0)) AS special_queues
, SUM(DECODE(dmet.is_special, 'N', 1, 0)) AS regular_queues
FROM delivery_mapping dmap
, generation_system gsys
, delivery_method dmet
WHERE dmap.generation_system_id = gsys.id
AND gsys.delivery_method_id = dmet.id
GROUP
BY dmap.person_id
/
ALTER MATERIALIZED VIEW special_queues_mv
ADD ( CONSTRAINT special_queues_mv_chk1 CHECK ((special_queues = 1 AND regular_queues = 0) OR ( regular_queues > 0 AND special_queues = 0 ) ) ENABLE VALIDATE)
/
That's how I did it. DazzaL's answer gave me a hint on how to do it.
Hey so I'm trying to create a new record based on one which already exists, but I'm having trouble ensuring that this record doesn't already exist. The database stores details of transactions and has a field for whether it repeats.
I'm generating the new dates using
SELECT datetime(transactions.date, '+'||repeattransactions.interval||' days')
AS 'newdate' FROM transactions, repeattransactions WHERE
transactions.repeat = repeattransactions.id
Initially I'd like to use SELECT instead of INSERT just for debugging purposes. I've tried to do something with GROUP BY or COUNT(*) but the exact logic is eluding me. So for instance I've tried
SELECT * FROM transactions, (SELECT transactions.id, payee, category, amount,
fromaccount, repeat, datetime(transactions.date,
'+'||repeattransactions.interval||' days') AS 'date' FROM transactions,
repeattransactions WHERE transactions.repeat = repeattransactions.id)
but this obviously treats the two tables as though they were joined; it doesn't append the records to the bottom of the table, which would let me group.
I've been struggling for days so any help would be appreciated!
EDIT:
The docs say "The ON CONFLICT clause applies to UNIQUE and NOT NULL constraints". I can't have a unique constraint on the table by definition as I have to create the exact same record for a new transaction with just a new date. Or have I misinterpreted this?
EDIT2:
id   payee    repeat   date
1    jifoda   7        15/09/2011
2    jifoda   7        22/09/2011   <-- allowed, as the date differs within the subset
3    grefa    1        15/09/2011   <-- allowed, as only the date is non-unique
4    grefa    1        15/09/2011   <-- not allowed! exactly the same as id 3
SQLite has a special DDL clause for handling duplicates. Like:
create table test(a TEXT UNIQUE ON CONFLICT IGNORE);
insert into test values (1);
insert into test values (1);
select count(*) from test;
1
It can also be ON CONFLICT REPLACE. See the docs.
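Applied to the table in the question's EDIT2, a composite UNIQUE over (payee, repeat, date) gives exactly the wanted behaviour: same payee and repeat are allowed as long as the date differs, and an exact duplicate is silently dropped. A sketch in Python's sqlite3 (the id column is omitted here, since the uniqueness is over the other three columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE transactions
                (payee TEXT, repeat INTEGER, date TEXT,
                 UNIQUE (payee, repeat, date) ON CONFLICT IGNORE)""")

rows = [("jifoda", 7, "15/09/2011"),
        ("jifoda", 7, "22/09/2011"),   # allowed: different date
        ("grefa",  1, "15/09/2011"),
        ("grefa",  1, "15/09/2011")]   # silently ignored: exact duplicate
conn.executemany("INSERT INTO transactions VALUES (?, ?, ?)", rows)
print(conn.execute("SELECT COUNT(*) FROM transactions").fetchone()[0])  # 3
```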
I have a database table, and one of the fields (not the primary key) has a unique index on it. Now I want to swap the values under this column for two rows. How could this be done? Two hacks I know are:
Delete both rows and re-insert them.
Update the rows with some other values, swap them, and then update to the actual values.
But I don't want to go for these as they do not seem to be the appropriate solution to the problem.
Could anyone help me out?
The magic word is DEFERRABLE here:
DROP TABLE ztable CASCADE;
CREATE TABLE ztable
( id integer NOT NULL PRIMARY KEY
, payload varchar
);
INSERT INTO ztable(id,payload) VALUES (1,'one' ), (2,'two' ), (3,'three' );
SELECT * FROM ztable;
-- This works, because there is no constraint
UPDATE ztable t1
SET payload=t2.payload
FROM ztable t2
WHERE t1.id IN (2,3)
AND t2.id IN (2,3)
AND t1.id <> t2.id
;
SELECT * FROM ztable;
ALTER TABLE ztable ADD CONSTRAINT OMG_WTF UNIQUE (payload)
DEFERRABLE INITIALLY DEFERRED
;
-- This should also work, because the constraint
-- is deferred until "commit time"
UPDATE ztable t1
SET payload=t2.payload
FROM ztable t2
WHERE t1.id IN (2,3)
AND t2.id IN (2,3)
AND t1.id <> t2.id
;
SELECT * FROM ztable;
RESULT:
DROP TABLE
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "ztable_pkey" for table "ztable"
CREATE TABLE
INSERT 0 3
id | payload
----+---------
1 | one
2 | two
3 | three
(3 rows)
UPDATE 2
id | payload
----+---------
1 | one
2 | three
3 | two
(3 rows)
NOTICE: ALTER TABLE / ADD UNIQUE will create implicit index "omg_wtf" for table "ztable"
ALTER TABLE
UPDATE 2
id | payload
----+---------
1 | one
2 | two
3 | three
(3 rows)
I think you should go for solution 2. There is no 'swap' function in any SQL variant I know of.
If you need to do this regularly, I suggest solution 1, depending on how other parts of the software are using this data. You can have locking issues if you're not careful.
But in short: there is no other solution than the ones you provided.
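For what it's worth, solution 2 (go through a temporary value, then swap) can be sketched like this in SQLite via Python; the table, the values, and the -1 sentinel are all illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, pos INTEGER UNIQUE)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, 10), (2, 20)])

# A direct swap would collide with the unique index row by row, so go
# through a sentinel value (-1 here) that no real row can hold.
with conn:  # one transaction: either all three updates apply or none
    conn.execute("UPDATE t SET pos = -1 WHERE id = 1")
    conn.execute("UPDATE t SET pos = 10 WHERE id = 2")
    conn.execute("UPDATE t SET pos = 20 WHERE id = 1")

print(conn.execute("SELECT id, pos FROM t ORDER BY id").fetchall())
# [(1, 20), (2, 10)]
```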
Further to Andy Irving's answer
this worked for me (on SQL Server 2005) in a similar situation
where I have a composite key and I need to swap a field which is part of the unique constraint.
key: pID, LNUM
rec1: 10, 0
rec2: 10, 1
rec3: 10, 2
and I need to swap LNUM so that the result is
key: pID, LNUM
rec1: 10, 1
rec2: 10, 2
rec3: 10, 0
the SQL needed:
UPDATE DOCDATA
SET LNUM = CASE LNUM
WHEN 0 THEN 1
WHEN 1 THEN 2
WHEN 2 THEN 0
END
WHERE (pID = 10)
AND (LNUM IN (0, 1, 2))
There is another approach that works with SQL Server: use a temp table and join to it in your UPDATE statement.
The problem is caused by having two rows with the same value at the same time, but if you update both rows at once (to their new, unique values), there is no constraint violation.
Pseudo-code:
-- setup initial data values:
insert into data_table(id, name) values(1, 'A')
insert into data_table(id, name) values(2, 'B')
-- create temp table that matches live table
select top 0 * into #tmp_data_table from data_table
-- insert records to be swapped
insert into #tmp_data_table(id, name) values(1, 'B')
insert into #tmp_data_table(id, name) values(2, 'A')
-- update both rows at once! No index violations!
update data_table set name = #tmp_data_table.name
from data_table join #tmp_data_table on (data_table.id = #tmp_data_table.id)
Thanks to Rich H for this technique.
- Mark
Assuming you know the PK of the two rows you want to update... This works in SQL Server, can't speak for other products. SQL is (supposed to be) atomic at the statement level:
CREATE TABLE testing
(
cola int NOT NULL,
colb CHAR(1) NOT NULL
);
CREATE UNIQUE INDEX UIX_testing_a ON testing(colb);
INSERT INTO testing VALUES (1, 'b');
INSERT INTO testing VALUES (2, 'a');
SELECT * FROM testing;
UPDATE testing
SET colb = CASE cola WHEN 1 THEN 'a'
WHEN 2 THEN 'b'
END
WHERE cola IN (1,2);
SELECT * FROM testing;
so you will go from:
cola colb
------------
1 b
2 a
to:
cola colb
------------
1 a
2 b
I also think that #2 is the best bet, though I would be sure to wrap it in a transaction in case something goes wrong mid-update.
An alternative (since you asked) to updating the unique index values with different values would be to update all of the other values in the rows to those of the other row. Doing this means that you can leave the unique index values alone, and in the end you have the data that you want. Be careful, though, if some other table references this table in a foreign key relationship, to make sure that all of the relationships in the DB remain intact.
I have the same problem. Here's my proposed approach in PostgreSQL. In my case, my unique index is a sequence value, defining an explicit user-order on my rows. The user will shuffle rows around in a web-app, then submit the changes.
I'm planning to add a "before" trigger. In that trigger, whenever my unique index value is updated, I will look to see if any other row already holds my new value. If so, I will give them my old value, and effectively steal the value off them.
I'm hoping that PostgreSQL will allow me to do this shuffle in the before trigger.
I'll post back and let you know my mileage.
In SQL Server, the MERGE statement can update rows that would normally break a UNIQUE KEY/INDEX. (Just tested this because I was curious.)
However, you'd have to use a temp table/variable to supply MERGE w/ the necessary rows.
For Oracle there is an option, DEFERRED, but you have to add it to your constraint.
SET CONSTRAINT emp_no_fk_par DEFERRED;
To defer ALL constraints that are deferrable during the entire session, you can use the ALTER SESSION SET constraints=DEFERRED statement.
Source
I usually think of a sentinel value that absolutely no row in my table could have. Usually, for unique column values, that's really easy; for example, for values of a 'position' column (information about the order of several elements), it's 0.
Then you can copy the two values into variables, set row A to the sentinel, update row B with A's old value, and finally set row A to B's old value. A few queries, but I know of no better solution.
Oracle has deferred integrity checking which solves exactly this, but it is not available in either SQL Server or MySQL.
1) Switch the ids for the names, swapping each adjacent pair:
id student
1 Abbot
2 Doris
3 Emerson
4 Green
5 Jeames
For the sample input, the output is:
id student
1 Doris
2 Abbot
3 Green
4 Emerson
5 Jeames
"in case n number of rows how will manage......"