SQL Error: ORA-02017: integer value required

I am trying to create a table in Oracle 11g. It is a backup of an already existing table, which has an NVARCHAR2(382.5) column.
But when I try to create the backup table with a CREATE statement, I get this error:
SQL Error: ORA-02017: integer value required
02017. 00000 - "integer value required"
*Cause:
*Action:
This is my create statement:
CREATE TABLE "MYSCHEMA"."BACKUPTABLE"
(
INPUT_FILE_NAME NVARCHAR2(382.5)
);
Why was the original table created with that datatype, and why is it not allowed now?

There is something else at play here. An NVARCHAR2 column requires an integer length parameter; you cannot have a fraction of a character.

If you want to create a backup table, you can use:
create table <name_for_backup_table> as select * from <raw_table>
That way all the columns of the backup get the correct types, and you don't have to write a separate query to copy the raw data.
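For reference, a minimal sketch of both variants of this approach (table names are placeholders; note that CREATE TABLE ... AS SELECT copies column types and data but not constraints, indexes, or defaults):

```sql
-- Full backup: structure and data in one statement;
-- column types are inherited from the source table
CREATE TABLE backuptable AS
SELECT * FROM rawtable;

-- Structure only: the always-false WHERE clause copies no rows
CREATE TABLE backuptable_empty AS
SELECT * FROM rawtable WHERE 1 = 0;
```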

I ran into the same thing; I believe it has something to do with 32-bit vs 64-bit clients.
I just installed the 18c 32-bit Oracle client on my Windows 10 machine:
1) Connecting with SQL*Plus and running a DESC, the column shows as NVARCHAR2(255):
Name Null? Type
----------------------------------------- -------- ----------------------------
TITLE NVARCHAR2(255)
2) Connecting with "SQL Developer" and running DESC there, the column shows NVARCHAR2(382.5):
Name Null? Type
----------------------------------------- -------- ----------------------------
TITLE NVARCHAR2(382.5)
You might want to check with Oracle, but it is not a real issue, so ...


How do I change column data type in Redshift?

I have tried changing the column data type in Redshift via SQL, but I keep getting an error:
[Amazon][Amazon Redshift] (30) Error occurred while trying to execute a query: [SQLState 42601] ERROR: syntax error at or near "TABLE" LINE 17: ALTER TABLE bmd_disruption_fv ^
Unable to connect to the Amazon Redshift server 'eceim.master.datamart.eceim.sin.auto.prod.c0.sq.com.sg'. Check that the server is running and that you have access privileges to the requested database
The first SQL query works. I also tried writing the ALTER TABLE statement before the SELECT lines, but that did not work either.
/* Extract selected columns and rename them for easier reference */
select ID, Completion_Time AS Date_Reported, Name2 AS Name, Contact_Info_for_updates AS Contact_Info,
Your_operation_line AS Operation_Line, Aircraft_Registration_SMU_SMT_etc AS Aircraft_Reg,
Designation_trade_B1_B2_ACT_AST_AAT AS Trade, Choose_your_Issue AS Issue, Manpower, Material, Equipment_GES,
Information, Tools, State_details_here_SVO_number_too AS Issue_Details, Time_wasted_on_due_to_issue AS Time_Wasted,
State_additional_comments_suggestions AS Additional_Comments, Stakeholders, Status
from bmdm.bmd_disruption_fv
/* Change column data type */
ALTER TABLE bmd_disruption_fv
{
ALTER COLUMN ID TYPE INT
}
Several things are causing issues here. First, the curly brackets '{}' should not be in the ALTER TABLE statement. It should look like this:
alter table event alter column eventname type varchar(300);
Second, and likely more importantly, Redshift only lets you change the length of VARCHAR columns, so changing a column's type to INT is not possible with ALTER COLUMN. You will need a multistep process to make this change to the table.
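A sketch of that multistep process, reusing the table and column names from the question (this assumes every existing id value is actually castable to INT; the intermediate column name id_int is made up for illustration):

```sql
-- 1. Add a new column with the target type
ALTER TABLE bmd_disruption_fv ADD COLUMN id_int INT;

-- 2. Copy the data across, casting as we go
UPDATE bmd_disruption_fv SET id_int = CAST(id AS INT);

-- 3. Drop the old column and rename the new one into place
ALTER TABLE bmd_disruption_fv DROP COLUMN id;
ALTER TABLE bmd_disruption_fv RENAME COLUMN id_int TO id;
```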

How to alter datatype of a column in BigQuery

I'm trying to change the datatype of a column in my BigQuery table from INT64 to STRING, with the condition that it is NOT NULL.
When I type:
ALTER TABLE table_name ALTER COLUMN id STRING NOT NULL
I get an error
Syntax error: Expected keyword DROP or keyword SET but got identifier "STRING"
How should I resolve this?
It is unsupported to change a column's data type at the moment.
Take a look at the official documentation. It explains two ways to manually change a column's data type. For the record:
Using a SQL query: choose this option if you are more concerned about simplicity and ease of use, and you are less concerned about costs.
Recreating the table: choose this option if you are more concerned about costs, and you are less concerned about simplicity and ease of use.
As for the syntax error itself: it occurred because you did not use SET:
ALTER TABLE table_name
ALTER COLUMN id SET DATA TYPE STRING
but even with the correct syntax, it is unfortunately not possible to alter from INT64 to STRING directly.
What you can do is create a new table using
CAST(id AS STRING) id
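A sketch of that recreate-the-table approach, assuming a dataset named mydataset (an assumption; substitute your own). CREATE OR REPLACE TABLE overwrites the table in place, and SELECT * EXCEPT keeps all the other columns unchanged:

```sql
CREATE OR REPLACE TABLE mydataset.table_name AS
SELECT
  CAST(id AS STRING) AS id,
  * EXCEPT (id)
FROM mydataset.table_name;
```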

DB2/400 - Auto generated timestamp on change (error)

I'm trying to create a table with a timestamp column that is automatically set to the current timestamp on each update of the record. I'm on DB2/400 (version V5R3) using the ODBC driver.
That's the query:
CREATE TABLE random_table_name (
ID integer not null generated always as identity,
USERS_ID varchar (30),
DETAILS varchar (1000),
TMSTML_CREATE timestamp default current timestamp ,
TMSTMP_UPDATE timestamp not null generated always for each row on update as row change timestamp,
PRIMARY KEY ( ID )
)
I get this error (translated):
ERROR [42000] [IBM][iSeries Access ODBC Driver][DB2 UDB]SQL0104 - Token EACH not valid. Valid tokens: BIT SBCS MIXED.
Without the TMSTMP_UPDATE line the query works. How can I solve this?
EDIT: OK, I understand that on my DB2 version the only way is to use triggers, but today the AS400 seems to be out to get me.
I'm trying with this:
CREATE TRIGGER random_trigger_name
AFTER UPDATE ON random_table_name
REFERENCING NEW AS NEW_ROW
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
SET NEW_ROW.TMSTM_UPDATE = CURRENT TIMESTAMP;
END
Error (translated):
ERROR [42000] [IBM][iSeries Access ODBC Driver][DB2 UDB]SQL0312 - Variable TMSTM_UPDATE not defined or not available.
The column TMSTM_UPDATE exists and it's a normal timestamp column.
EDIT 2: I've solved the trigger problem by replacing 'after' with 'before'. Now everything works as expected. Thank you all!
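Going by EDIT 2, the working trigger is presumably the original with BEFORE instead of AFTER. A hedged sketch of that version (note: on DB2 for i, BEFORE triggers may require MODE DB2ROW rather than MODE DB2SQL, and a single SET statement needs no BEGIN ATOMIC block):

```sql
CREATE TRIGGER random_trigger_name
BEFORE UPDATE ON random_table_name
REFERENCING NEW AS NEW_ROW
FOR EACH ROW MODE DB2ROW
SET NEW_ROW.TMSTM_UPDATE = CURRENT TIMESTAMP
```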
There is a standard way to do this in iSeries DB2. It is documented here: IBM Knowledge Center - Creating a row change timestamp column
You should change your table definition to:
TMSTMP_UPDATE TIMESTAMP NOT NULL FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP
I am using it in production tables on V7R2 and it works like a charm :) Hopefully it is also available on V5R3.
EDIT: As Charles mentioned below, this feature is unfortunately only available since DB2 for i V6R1.

Why does Oracle 12c query require double quotes around table [duplicate]

This question already has an answer here:
ORA-00942: table or view does not exist - Oracle
(1 answer)
Closed 7 years ago.
The database I'm querying is Oracle 12c. Detailed info about database version is as follows:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
I'm trying to eliminate the need to have double quotes around every view or table in my SQL query.
Following works (from Oracle Sql Developer GUI)
select m."Metadata"
from "EvMetadata" m
Following gives error (from Oracle Sql Developer GUI)
select m.Metadata
from EvMetadata m
Error is
ORA-00942: table or view does not exist
00942. 00000 - "table or view does not exist"
*Cause:
*Action: Error at Line: 2 Column: 6
I generated DDL, which looks like this
CREATE TABLE "EVP"."EvMetadata"
("EvMetadataId" NUMBER(10,0) GENERATED ALWAYS AS IDENTITY MINVALUE 1 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE ,
"InsertDate" TIMESTAMP (6),
"SessionId" NVARCHAR2(17),
"FileCheckSum" NVARCHAR2(32),
"Metadata" NCLOB,
"Device" NVARCHAR2(20),
"User" NVARCHAR2(20)
) SEGMENT CREATION IMMEDIATE
So based on #toddlermenot's comment below, it is very possible that this is how the table was created: with double quotes. I used the ORM Entity Framework Code First to generate the schema for me, so it seems the ORM adds the double quotes by default.
Maybe you created the table with double quotes?
Using double quotes preserves the case, and since the table name has both upper- and lowercase letters in your example, Oracle can find it only when you use the double quotes.
Without the double quotes, Oracle converts the name to upper case by default, irrespective of the case you typed.
For example:
if you create the table using
create table "TaBlE_NaMe" (blah..)
then you must use the double quotes in your SELECT.
If you create the table using
create table TaBlE_NaMe (blah..)
The SELECT without quotes should work correctly. (It would also work with quotes if all the letters of the table's name were upper case.)
Names in Oracle, be it tables, columns, objects, views, packages, procedures, functions, etc., are by default UPPER CASE unless quoted with double quotes. Furthermore, all name resolution in Oracle is case-sensitive.
What this means is that when you create or reference a database object without quoting the name, Oracle implicitly converts that name to upper case before creating the object or resolving the name. So the unquoted EvMetadata table name is equivalent to the quoted uppercase "EVMETADATA" table name, but not to the quoted mixed-case "EvMetadata" table name.
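A minimal demonstration of that resolution rule, using a made-up one-column table:

```sql
CREATE TABLE "EvMetadata" ("Metadata" NCLOB);

SELECT "Metadata" FROM "EvMetadata";  -- works: exact quoted match

SELECT Metadata FROM EvMetadata;      -- ORA-00942: the unquoted names
                                      -- resolve to "EVMETADATA"."METADATA",
                                      -- which does not exist
```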

CREATE TYPE "XYZ" AS TABLE OF VARCHAR2(104) in postgresql

I have converted the types from Oracle to Postgres using the ora2pg tool.
CREATE TYPE "XYZ" AS TABLE OF VARCHAR2(104)
This works fine in Oracle, but ora2pg converts it as-is for Postgres, with the warning:
-- Unsupported, please edit to match PostgreSQL syntax
CREATE TYPE "XYZ" AS TABLE OF VARCHAR(104)
This does not work in Postgres. I have tried other statements such as
create type XYZ as (xyz varchar[])
but that does not give the desired result: the type does not match. The function expects a type like Oracle's TABLE OF VARCHAR, whereas
create type XYZ as (xyz varchar[])
is a composite type in Postgres.
Is there any way to define a type equivalent to the one above? Please help.
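One possible mapping, sketched under the assumption that the receiving function only needs an ordered collection of strings: in PostgreSQL the usual replacement for Oracle's TABLE OF VARCHAR2 is a plain array, optionally wrapped in a DOMAIN to give it a reusable name. (The names below are hypothetical, and per-element length enforcement may differ from Oracle.)

```sql
-- A named, reusable array type
CREATE DOMAIN xyz AS varchar(104)[];

-- A function can then accept the domain (or varchar[] directly);
-- the cast inside makes the array functions apply to the base type
CREATE FUNCTION takes_xyz(v xyz) RETURNS int
  LANGUAGE sql
  AS $$ SELECT array_length(v::varchar[], 1) $$;
```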