I have a Spring Boot application that connects to CockroachDB. I have the following Flyway migration script that creates the table:
CREATE TABLE IF NOT EXISTS sample_table (
name varchar,
groups varchar,
PRIMARY KEY (name));
The application starts fine, but whenever a value for the 'groups' column is longer than 255 characters, I get an error:
Caused by: org.postgresql.util.PSQLException: ERROR: value too long for type VARCHAR(255)
In the SQL script I declared the column 'groups' as 'varchar', which should not restrict the length, so I am not sure why I am getting this error.
There isn't an implicit default limit on varchar in CockroachDB. This error indicates that the groups column was initialized with the type varchar(255) when the table was created. Running SHOW CREATE TABLE sample_table; should confirm this.
It's possible that something unexpected is going on in the Flyway migration and the table is not being created the way you intended.
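If SHOW CREATE TABLE does report VARCHAR(255), a sketch of a fix is below; the exact ALTER support depends on your CockroachDB version, and with Flyway you would normally ship this as a new migration rather than run it by hand:
-- Confirm what Flyway actually created
SHOW CREATE TABLE sample_table;
-- If groups came out as VARCHAR(255), widen it to an unbounded string
-- (sketch; some CockroachDB versions require this to run outside an explicit transaction)
ALTER TABLE sample_table ALTER COLUMN groups SET DATA TYPE STRING;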
In short, I have this statement:
CREATE TABLE IF NOT EXISTS `journalData`
(
`JD_Key` INTEGER PRIMARY KEY AUTOINCREMENT,
`JD_Event_Key` INTEGER,
`JD_VarCount` INTEGER,
`JD_VarTypes` TEXT,
`JD_VarValues` TEXT,
`JD_EventDate` TEXT
);
INSERT INTO `journalData`
VALUES (24, 0, '', '', '04.02.2023 20:26:18');
And following the SQLite tutorial on AUTOINCREMENT (https://www.sqlitetutorial.net/sqlite-autoincrement/), it says the following:
Second, insert another row without specifying a value for the person_id column:
INSERT INTO people (first_name,last_name)
VALUES('William','Gate');
This implies that you can add rows to the table without having to specify the table's primary key, but I get this error:
Uncaught Error: table journalData has 6 columns but 5 values were supplied
What am I doing wrong here? I've tried adding a single row with the key before the insert mentioned above, but I keep getting this error.
I found a solution; it seems the error was somewhat misleading: I have to specify the column names.
Though if I specify values for all of the columns in the table, it is not necessary to specify column names. Because of that, during testing, I assumed the syntax here is similar to MySQL, where from experience I remembered that you don't have to specify names in this exact case.
So the following query worked for me:
INSERT INTO `journalData`
(JD_Event_Key, JD_VarCount, JD_VarTypes, JD_VarValues, JD_EventDate)
VALUES (24,0,'','','04.02.2023 20:26:18');
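For completeness, the column-less form from the tutorial also works, but only if a value is supplied for every column; passing NULL for the AUTOINCREMENT key makes SQLite generate it. A sketch against the journalData table above:
-- No column list, so all 6 columns need a value;
-- NULL for JD_Key lets SQLite assign the next AUTOINCREMENT value.
INSERT INTO `journalData`
VALUES (NULL, 24, 0, '', '', '04.02.2023 20:26:18');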
I'm trying to change a column in Redshift from varchar to integer. I've already checked and the strings are all numbers, so the cast should work fine.
When I run:
alter table schema.table_name alter column "id" type int;
I get the following error:
ERROR: target data type "int8" is not supported [SQL State=0A000]
I've checked the Redshift documentation, and just to rule out a few potential causes:
The field is not a primary or foreign key
There are no compression encodings on it
There are no default values
The code is not in a transaction block
Any pointers would be amazing, thank you!
Altering the column type is only supported for VARCHAR columns - "ALTER COLUMN column_name TYPE new_data_type --
A clause that changes the size of a column defined as a VARCHAR data type." See: https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_TABLE.html
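Since the in-place type change isn't supported, the usual workaround is to add a new INT column, copy the cast values over, then swap the columns. A sketch against the table from the question (id_int is just a temporary name, and the column ends up last in the column order):
-- Add-copy-swap workaround (sketch)
ALTER TABLE schema.table_name ADD COLUMN id_int INT;
UPDATE schema.table_name SET id_int = CAST(id AS INT);
ALTER TABLE schema.table_name DROP COLUMN id;
ALTER TABLE schema.table_name RENAME COLUMN id_int TO id;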
I want to create a table that contains a nullable column with the GENERATED BY DEFAULT AS IDENTITY option, so I run the following query:
CREATE TABLE my_table (
generated INTEGER NULL GENERATED BY DEFAULT AS IDENTITY,
data TEXT NOT NULL
);
But once I try to insert a row into the table whose generated field is NULL, like this:
INSERT INTO my_table(generated, data) VALUES(NULL, 'some data');
I get a null-constraint violation error.
However, if I change the order of the properties on the my_table.generated column:
CREATE TABLE my_table (
generated INTEGER GENERATED BY DEFAULT AS IDENTITY NULL,
data TEXT NOT NULL
);
It inserts rows whose generated field is NULL without any issues.
Is this the expected behavior for the case?
Postgres developers told me this is a bug since identity columns weren't supposed to be nullable (see the patch file under the response).
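Until that is fixed, if what you actually want is a generated value rather than a stored NULL, don't pass NULL for the identity column: either leave it out of the column list or use the DEFAULT keyword. A sketch against my_table above:
-- Omit the identity column entirely...
INSERT INTO my_table (data) VALUES ('some data');
-- ...or request the default explicitly.
INSERT INTO my_table (generated, data) VALUES (DEFAULT, 'some data');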
My schema looks like this:
CREATE TABLE newsletter_status(
identificationnumber BIGINT GENERATED BY DEFAULT AS IDENTITY(START WITH 0) NOT NULL PRIMARY KEY,
bpid varchar(10),
consumer varchar(4) NOT NULL,
source varchar(10),
vkorg varchar(4) NOT NULL,
cryptid varchar(255) NOT NULL,
status varchar(25),
regDat timestamp,
confirmDat timestamp,
updateDat timestamp
);
CREATE TABLE scpnewsletter (version varchar(255));
CREATE INDEX bpid_index ON newsletter_status (bpid);
CREATE INDEX cryptid_index ON newsletter_status (cryptid);
Running locally against an H2 database, this works fine for insertions. I insert an object whose fields consumer, source, vkorg, cryptid and status ARE set while the others are not. The database should generate an identificationnumber, and H2 does.
When run on the customer's DEV environment with a HANA DB, the insertion fails, saying:
PreparedStatementCallback; uncategorized SQLException for SQL [INSERT INTO newsletter_status (identificationnumber,bpid,consumer,source,vkorg,cryptid,status,regDat,confirmDat,updateDat) VALUES (?,?,?,?,?,?,?,?,?,?)]; SQL state [HY000]; error code [287]; SAP DBTech JDBC: [287]: cannot insert NULL or update to NULL: Not nullable "IDENTIFICATIONNUMBER" column; nested exception is com.sap.db.jdbc.exceptions.JDBCDriverException: SAP DBTech JDBC: [287]: cannot insert NULL or update to NULL: Not nullable "IDENTIFICATIONNUMBER" column
It does not like that identificationnumber is null.
Going further: if I add an identificationnumber, it says the same thing about bpid:
PreparedStatementCallback; uncategorized SQLException for SQL [INSERT INTO newsletter_status (identificationnumber,bpid,consumer,source,vkorg,cryptid,status,regDat,confirmDat,updateDat) VALUES (?,?,?,?,?,?,?,?,?,?)]; SQL state [HY000]; error code [287]; SAP DBTech JDBC: [287]: cannot insert NULL or update to NULL: Not nullable "BPID" column; nested exception is com.sap.db.jdbc.exceptions.JDBCDriverException: SAP DBTech JDBC: [287]: cannot insert NULL or update to NULL: Not nullable "BPID" column
bpid can clearly be null in the schema, so this error about that field is confusing.
If both bpid and identificationnumber are set, then there is no problem with the database.
I want to store an object where both of these IDs can be null, but I also still want a unique, generated identificationnumber.
I can't debug on the customer's DEV environment. Any idea what could possibly be going wrong here?
Ok, so your IDENTIFICATIONNUMBER can never ever be NULL.
Based on the DDL provided, it has a NOT NULL constraint and is the single-column primary key, which implicitly makes it NOT NULL.
If H2 allows NULL inserts, that's an H2 bug / non-compliant behavior.
Concerning the BPID column: it looks like you're still trying to insert a value for IDENTIFICATIONNUMBER even though it is defined as an IDENTITY column. I assume that by specifying NULL as its value, you want to make HANA use the DEFAULT value (the sequence).
If that's correct, then the answer is: it does not work this way.
Also: the error message wrongly named BPID as the problematic field.
The correct way to use DEFAULT values in INSERT statements in HANA is to leave the columns for which the DEFAULT values should be used out of the column list.
The SQL standard also offers the DEFAULT keyword for this, but that is (as of HANA 2 SPS 04) not supported.
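In other words, an insert along these lines (a sketch; the literal values are placeholders for the NOT NULL columns) lets HANA fill identificationnumber from the identity and leaves bpid as NULL simply by not mentioning either column:
-- identificationnumber and bpid are omitted: the identity fills the first,
-- the second defaults to NULL (placeholder values below)
INSERT INTO newsletter_status (consumer, source, vkorg, cryptid, status, regDat)
VALUES ('CONS', 'web', 'DE01', 'some-crypt-id', 'PENDING', CURRENT_TIMESTAMP);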
Recently I have read about UDTs. I have created a type, but I have a problem with it. Please look into the following:
---drop type ssn
CREATE TYPE ssn
FROM VARCHAR(11) NOT NULL;
DECLARE @er ssn;
IF Object_id('TEMPDB.DBO.#ter', 'U') IS NOT NULL
DROP TABLE #ter;
CREATE TABLE #ter (
PERIOD_SID INT
,PERIOD_QUAR VARCHAR(10) PRIMARY KEY (PERIOD_SID)
)
INSERT INTO #ter (
PERIOD_SID
,PERIOD_QUAR
)
SELECT *
FROM (
VALUES (
(1)
,(@er)
)
) V(p, q)
I have created a type ssn as VARCHAR(11) NOT NULL and ran the above logic, and it executed successfully.
As per my assumption it should throw an error.
I need to know why the above logic runs successfully.
EDIT
As per the suggestion, I have added this UDT as a column in SQL Server, since in Oracle we can create a column with collections similar to a UDT:
IF Object_id('TEMPDB.DBO.#ter1', 'U') IS NOT NULL
DROP TABLE #ter1;
CREATE TABLE #ter1 (
PERIOD_SID INT
,PERIOD_QUAR ssn PRIMARY KEY (PERIOD_SID)
)
An error was encountered while creating the table, saying there was no such datatype "ssn".
Thanks in advance
Here is the reason:
The null_type parameter only defines the default nullability for this data type. If nullability is explicitly defined when the alias data type is used during table creation, it takes precedence over the defined nullability.
This is taken from the sp_addtype documentation, but it should be the same case with CREATE TYPE; sp_addtype is also used to create user-defined data types.
In your case, you cannot create a variable that refuses NULL values (local variables are always nullable), so I think the NOT NULL property is overridden.
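A quick way to see the override in action is sketched below (the table names are made up, and it uses permanent tables because alias types are database-scoped, which is also why the temp table in tempdb could not find ssn):
-- Variables of an alias type are always nullable, so this prints NULL.
DECLARE @er ssn;
SELECT @er AS er_value;

-- A column of the type inherits NOT NULL from CREATE TYPE ... NOT NULL,
-- so this insert fails with a NOT NULL violation.
CREATE TABLE dbo.ssn_default (val ssn);
INSERT INTO dbo.ssn_default (val) VALUES (NULL);

-- Explicit nullability on the column overrides the type, so this succeeds.
CREATE TABLE dbo.ssn_nullable (val ssn NULL);
INSERT INTO dbo.ssn_nullable (val) VALUES (NULL);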