ORA-00947 - not enough values: Occurs in one server but not another

I am working on a project which requires adding one column to an existing table.
It is like this:
The OLD TBL Layout
OldTbl(
column1 number(1) not null,
column2 number(1) not null
);
SQL TO Create the New TBL
create table NewTbl(
column1 number(1) not null,
column2 number(1) not null,
column3 number(1)
);
When I try to insert the data with the SQL below,
it executed successfully on one Oracle server,
but on another Oracle server I got "ORA-00947: not enough values".
insert into NewTbl select
column1,
column2
from OldTbl;
Is there any Oracle option that may cause this kind of difference?

ORA-00947: not enough values
This is the error you received, which means your table actually has more columns than the INSERT supplies values for.
Perhaps the new column was added on only one of the servers.
There is also a different syntax for INSERT, which is more readable: you mention the column names as well. When such a SQL statement is issued, the INSERT still works as long as no NOT NULL column is missed out, with NULL stored in the omitted columns.
INSERT INTO TABLE1
(COLUMN1,
COLUMN2)
SELECT
COLUMN1,
COLUMN2
FROM
TABLE2

insert into NewTbl select
column1,
column2
from OldTbl;
The above query is wrong because your new table has three columns while your SELECT lists only two. Had the number and order of the columns been the same, it would have worked.
If the number or order of the columns differs, you must explicitly list the column names in the correct order.
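Applied to the tables in the question, a minimal sketch would be the following (column3 is nullable, so it simply stays NULL):
INSERT INTO NewTbl (column1, column2)
SELECT column1, column2
FROM OldTbl;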
I would prefer CTAS (create table as select) here; it would be faster than the insert.
CREATE TABLE new_tbl AS
SELECT column1, column2, 1 AS column3 FROM old_tbl;
You could use NOLOGGING and PARALLEL to increase the performance.
CREATE TABLE new_tbl NOLOGGING PARALLEL 4 AS
SELECT column1, column2, 1 AS column3 FROM old_tbl;
This will create the new table with 3 columns; the first two columns will have the data from the old table, and the third column will have the value 1 for all rows. You could use any value for the third column as per your choice. I kept it as 1 because you wanted the third column to have data type NUMBER(1).
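One caveat to keep in mind: CTAS does not carry over NOT NULL constraints, so if you want the constraints from the original definition, a sketch like the following, using the names above, would add them back afterwards:
ALTER TABLE new_tbl MODIFY column1 NOT NULL;
ALTER TABLE new_tbl MODIFY column2 NOT NULL;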

Related

What is a quick way to insert test data into tables in Oracle/Generate insert statements with different values (SQL)?

I'm trying to populate my tables with test data, but I'm looking for a way to do so without copying and pasting the same insert statement for each table repeatedly for ages and changing the values.
Is there a simple and fast way to create a bunch of INSERT statements with different data for each column, perhaps getting the data from a spreadsheet and feeding it into the INSERT statements?
You can create sample data very easily using the various functions in the DBMS_RANDOM package.
CREATE TABLE test_data
AS
-- expressions need column aliases in a CTAS
SELECT DBMS_RANDOM.VALUE() AS random_number, DBMS_RANDOM.STRING('x', 20) AS random_string
FROM DUAL
CONNECT BY LEVEL <= 100;
Create two tables: one to contain the data you are going to use in your tests, and a second that your queries will actually use:
CREATE TABLE test_data_sources (
test_id NUMBER,
column1 NUMBER,
column2 DATE,
column3 VARCHAR2(20)
);
CREATE TABLE my_table (
column1 NUMBER,
column2 DATE,
column3 VARCHAR2(20)
);
Then, if you want to set the data for your first test:
DELETE FROM my_table;
-- or TRUNCATE my_table;
INSERT INTO my_table ( column1, column2, column3 )
SELECT column1, column2, column3
FROM test_data_sources
WHERE test_id = 1; -- replace with the id of whichever test you want to perform.
Then you can run your test against the MY_TABLE table with the appropriate data and then repeat and replace the data in the table with the data for the next test.
You need to populate TEST_DATA_SOURCES once with the appropriate data for each test (you can generate the DML statements from a spreadsheet if you want), but then it will be there to re-use each time you want to re-run the test.
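For instance, a minimal sketch of seeding one test case with random data, using the table above and DBMS_RANDOM (the value ranges are arbitrary):
INSERT INTO test_data_sources (test_id, column1, column2, column3)
SELECT 1,
       ROUND(DBMS_RANDOM.VALUE(1, 1000)),            -- random number
       TRUNC(SYSDATE) - ROUND(DBMS_RANDOM.VALUE(0, 365)),  -- random date in the last year
       DBMS_RANDOM.STRING('U', 10)                   -- random 10-character string
FROM dual
CONNECT BY LEVEL <= 50;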
If you have access to a directory object and a spreadsheet, you could convert that spreadsheet into a CSV, and then load it as an external table. Once it's in an external table, you could do something like
INSERT INTO my_table ( column1, column2, column3 )
SELECT column1, column2, column3
FROM <EXTERNAL-TABLE-NAME-GOES-HERE>
WHERE test_id = 1;
If you're using Oracle 18 or newer, you can use the answer provided here: https://stackoverflow.com/a/49077724/1257557
which looks like this:
SELECT time_id, prod_id, quantity_sold, amount_sold
FROM EXTERNAL (
(time_id DATE NOT NULL,
prod_id INTEGER NOT NULL,
quantity_sold NUMBER(10,2),
amount_sold NUMBER(10,2))
TYPE ORACLE_LOADER
DEFAULT DIRECTORY data_dir1
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY '|') -- You'll want to change this to a comma, if it's a CSV
LOCATION ('sales_9.csv') REJECT LIMIT UNLIMITED) sales_external;
It is worth noting that this solution requires you to have access to the file system on the database server so you can place the file(s) in whatever folders that the DB needs to read them from. If you don't have that access, then this option will not work.
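If the directory object does not exist yet, a DBA (or a user with the CREATE ANY DIRECTORY privilege) has to create it and grant access; a sketch, where the path and the grantee are placeholders:
CREATE DIRECTORY data_dir1 AS '/path/to/csv/files';  -- placeholder path on the database server
GRANT READ ON DIRECTORY data_dir1 TO your_user;       -- your_user is a placeholder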
If you're working with spreadsheets, you could consider creating columns or a macro that generates the INSERT statements for you once you've updated the spreadsheet; then you just copy/paste the statements from there into SQL Developer, SQL*Plus, or whatever you use.

SQL Server : INSERT INTO SELECT doesn't insert into the correct column

I'm using SQL Server 2012 to try to take the values of one column in a table and put them into a column of another table. If I try to run the following query:
INSERT INTO table2 (column3)
SELECT column3
FROM table1
WHERE (ScopeID IS NOT NULL)
ORDER BY Name
For table2, column3 is the same type (an int), and NULL values are allowed. But when I try to execute the query, it returns:
Cannot insert the value NULL into column 'column1', table 'dbo.table2'; column does not allow nulls. INSERT fails.
But I'm not trying to insert into column1... Is it just a syntax thing where the order of the columns has to match?
You are inserting into column1. Remember, you are inserting entire rows of values, so you should really have a value for all columns. Your query is equivalent to:
INSERT INTO table2 (column1, column2, column3)
SELECT NULL, NULL, column3
FROM table1
WHERE (ScopeID IS NOT NULL)
ORDER BY Name;
(and so on for all the columns in the table.)
I am guessing that you actually want an update, but your question doesn't provide enough information to give further guidance.
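If an update is indeed what's wanted, a hedged sketch would look like the following; it assumes the two tables share a key such as ScopeID, which the question doesn't confirm:
UPDATE t2
SET t2.column3 = t1.column3
FROM table2 AS t2
JOIN table1 AS t1
    ON t1.ScopeID = t2.ScopeID    -- assumed join key, adjust to the real one
WHERE t1.ScopeID IS NOT NULL;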

INSERT statement is changing integers to null values

I'm creating a table with 3 million rows of data and 9 columns.
I am using the following syntax to insert my data.
INSERT INTO myTable
( column1,
column2,
column3,
...
column11,
problemColumn
)
Select
<exampleQuery>
One column (which I will refer to as problemColumn) ends up with 1.2 million NULL values in this table.
When I run exampleQuery on its own (not inserting it into the table), problemColumn returns 0 null values.
problemColumn is correctly defined as an integer when the table is created
problemColumn has 300,000 distinct values. Each value appears in the table at least once, which means that it can't be an issue of a poorly-formatted value
There is no obvious pattern of values being systematically deleted
Edit: Some additional clarifications:
There are no calculations or joins done on problemColumn. I am simply selecting that variable from another table
problemColumn is an integer in the source table, so it is not an issue of a mismatched variable type
Could this be an issue with the size of the table in the database? I cannot comprehend why a query's results would fundamentally change when used in an INSERT statement.
Most likely cause (I've done it myself) is fat fingers - the column you're inserting is in the wrong position. Hard to verify without seeing the actual code, but it might be as simple as:
insert into table
(column1,
column2)
select
column2,
column1
from somewhere
Second possibility - there's a trigger on the destination table, which is changing the data. One of the many reasons I hate triggers.
I don't know Teradata, but the point of an RDBMS is to be able to handle exactly this scenario, so it's very unlikely it's anything to do with the size. To verify this, please try to limit the query to 1 result, and see what happens.
If that doesn't work, please convert the results of that query into an insert statement using "values"
INSERT INTO myTable
( column1,
column2,
column3,
...
column11,
problemColumn
)
values
(....)
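As a further check before going down that route, it may help to compare the NULL counts produced by the source query itself and by the loaded table; a sketch, reusing the placeholders above:
-- NULLs coming out of the source query
SELECT COUNT(*) AS src_nulls
FROM ( <exampleQuery> ) AS src
WHERE src.problemColumn IS NULL;

-- NULLs that actually landed in the table
SELECT COUNT(*) AS tbl_nulls
FROM myTable
WHERE problemColumn IS NULL;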

How to select data and insert that data using a single SQL statement?

I want to select some data using simple SQL and insert that data into another table. Both tables are the same: the data types and column names match, because one is simply a temporary copy of the master table. Using a single SQL statement I want to insert the data into the other table, with an E_ID=? check in the WHERE condition. One concern is that sometimes there may not be any matching rows in the table; could that throw a SQL exception? Another is that there may be multiple matching rows, that is, one E_ID may have multiple rows; for example, my attachment_master and attachments_temp tables have multiple rows for one single ID. How do I handle those cases? I have one more problem: I can insert my master table data into the temp table using the code below, but I want to keep all the other data the same and change only one column, because I want to set the temp table's status column.
insert into dates_temp_table SELECT * FROM master_dates_table where e_id=?;
Here all the data is inserted into my dates_temp_table, but I want to copy all the column data and only set the dates_temp_table status column to "Modified". How should I change this code?
You could try this:
insert into table1 ( col1, col2, col3,.... )
SELECT col1, col2, col3, ....
FROM table2 where (you can check any condition here on table1 or table2 or mixed)
Hope it may help you.
Edit: If I understand your requirement properly, then this may be a helpful solution for you:
insert into table1 ( col1, col2, col3, ...., coln, <your modification column name here> )
SELECT col1, col2, col3, ...., coln, 'modified'
FROM table2 where table2.e_id=<your id value here>
As per your comment in above other answer:
"I send my E_ID. I don't want to matching and get. I send my E_ID and
if that ID available I insert those data into my temp table and change
temp table status as 'Modified' and otherwise don't do anything."
According to your statements above: if the given e_id exists, this will copy all the column values into your table1 and place the value 'modified' in the 'status' column of your table1.
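Translated to the table names from the question, a sketch could look like this (col1 and col2 are placeholders for the real column list):
INSERT INTO dates_temp_table (e_id, col1, col2, status)
SELECT e_id, col1, col2, 'Modified'
FROM master_dates_table
WHERE e_id = ?;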
You can use a MERGE statement if I understand your requirement correctly.
As I do not have your table structure, the statement below is based on assumptions; see whether it caters to your requirement. I am assuming that e_id is the primary key; change it as per your table design.
MERGE INTO dates_temp_table trgt
USING (SELECT * FROM master_dates_table WHERE e_id=100) src
ON (trgt.prm_key = src.prm_key)
WHEN NOT MATCHED
THEN
INSERT (trgt.col, trgt.col2, trgt.status)
VALUES (src.col, src.col2, 'Modified');
insert into tablename( column1, column2, column3, column4 ) SELECT column1,
column2, column3, column4 from anothertablename where anothertablename.ID = ?
If multiple rows match, they will all be inserted; if that is not what you want, you have to narrow your search.

Difference between Select Into and Insert Into from old table?

What is the difference between these in terms of constraints, keys, etc.?
Select Into Statement
SELECT column1, column2, someInt, someVarChar
INTO ItemBack1
FROM table2
WHERE table2.ID = 7
Insert Into Statement
INSERT INTO table1 ( column1, column2, someInt, someVarChar )
SELECT table2.column1, table2.column2, 8, 'some string etc.'
FROM table2
WHERE table2.ID = 7
and also
Create table ramm as select * from rammayan
Edit 1:
Database SQL Server 2008
I'm going to assume MySQL here.
The first two are identical, as the documentation states.
The third statement allows for both table creation and population, though your syntax is wrong up there; check your database's documentation for the exact syntax.
Update
It's SQL Server =p
SELECT column1, column2, someInt, someVarChar
INTO ItemBack1
FROM table2
WHERE table2.ID = 7
The first statement will automatically create the ItemBack1 table, based on table2.
INSERT INTO table1 ( column1, column2, someInt, someVarChar )
SELECT table2.column1, table2.column2, 8, 'some string etc.'
FROM table2
WHERE table2.ID = 7
The second statement requires that table1 already exists.
See also: http://blog.sqlauthority.com/2007/08/15/sql-server-insert-data-from-one-table-to-another-table-insert-into-select-select-into-table/
If there's any difference in constraints, it would be because the second statement depends on what you have already created (and if the table is populated, etc.).
Btw, the third statement is Oracle(tm) and is the same as the first statement.
There are some very important differences between SELECT INTO and INSERT.
First, for the INSERT you need to pre-define the destination table. SELECT INTO creates the table as part of the statement.
Second, as a result of the first condition, you can get type conversion errors on the load into the table using INSERT. This cannot happen with a SELECT INTO (although the underlying query could produce an error).
Third, with a SELECT INTO you need to give all your columns names. With an INSERT, you do not need to give them names.
Fourth, SELECT INTO locks some of the metadata during the processing. This means that other queries on the database may be locked out of accessing tables. For instance, you cannot run two SELECT INTO statements at the same time on the same database, because of this locking.
Fifth, on a very large insert, you can sometimes see progress with INSERT but not with SELECT INTO. At least, this is my experience.
When I have a complicated query and I want to put the data into a table, I often use:
SELECT top 0 *
INTO <table>
FROM <query>;

INSERT INTO <table>
SELECT * FROM <query>;
Select Into -> creates the table on the fly when the select executes,
while
Insert Into -> presumes that the table already exists in the DB.
Lastly,
Create table as select simply creates the table from the result of the query.
I don't really understand your question. Let's try:
The 1st one selects the value of the columns "someVarChar" into a variable called "ItemBack1". Depending on your SQL-Server (mysql/oracle/mssql/etc.) you can now do some logic with this var.
The 2nd one inserts the result of
SELECT table2.column1, table2.column2, 8, 'some string etc.'
FROM table2
WHERE table2.ID = 7
into the table1 (Copy)
And the 3rd creates a new table "ramm" as a copy of the table "rammayan"
Generally speaking
Each one has its own particularities: one creates a temporary table, another uses a previously existing table, and the third one creates a new table with the exact same structure and formatting.
SELECT…INTO creates a new table in the default filegroup and inserts the resulting rows from the query into it
INSERT INTO: fills an already existing table
The third option is known as CTAS (Create Table As Select); do a search and you will get tons of useful links. Basically it creates a table, not a temporary one, with the structure and types used in the SELECT statement.
INSERT INTO SELECT inserts into an existing table.
SELECT INTO creates a new table and puts the data in it.
All of the columns in the query must be named so each of the columns in the table will have a name. This is the most common mistake I see for this command.
The data type and nullability come from the source query.
If one of the source columns is an identity column and meets certain conditions (no JOINs in the query for example) then the column in the new table will also be an identity.
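To illustrate both points, here is a hedged T-SQL sketch using the question's column names plus a made-up alias: the expression must get a name so SELECT INTO can create the column, and sys.columns can be queried afterwards to see whether the identity property carried over:
SELECT someVarChar, someInt * 2 AS doubledInt   -- the alias is required, or SELECT INTO fails
INTO NewTableName
FROM table2;

-- check which columns of the new table kept the IDENTITY property
SELECT name, is_identity
FROM sys.columns
WHERE object_id = OBJECT_ID('NewTableName');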
INSERT INTO SELECT
CREATE TABLE ExistingTableName1 (ColumnName VARCHAR(255));
GO
INSERT
INTO ExistingTableName1
SELECT ColumnName
FROM ExistingTableName2;
GO
SELECT INTO
SELECT ColumnName INTO NewTableName
FROM ExistingTableName1;
GO
The SQL SELECT INTO Statement
The SELECT INTO statement copies data from one table into a new table.
SELECT INTO Syntax
SELECT column1, column2, column3, ...
INTO newtable [IN externaldb]
FROM oldtable
WHERE condition;
The new table will be created with the column-names and types as defined in the old table. You can create new column names using the AS clause.
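For example, following the placeholder names in the syntax block above (new_name1 and new_name2 are just illustrative aliases):
SELECT column1 AS new_name1, column2 AS new_name2
INTO newtable
FROM oldtable
WHERE condition;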
The SQL INSERT INTO SELECT Statement
The INSERT INTO SELECT statement copies data from one table and inserts it into another table.
INSERT INTO SELECT Syntax
INSERT INTO table2 (column1, column2, column3, ...)
SELECT column1, column2, column3, ...
FROM table1
WHERE condition;
INSERT INTO SELECT requires that data types in source and target tables match
The existing records in the target table are unaffected