Create VARCHAR FOR BIT DATA column - sql

I am trying to create a SQL table in NetBeans 8.0 where one of the columns is meant to store a byte[] (so VARBINARY is the type I am looking for). The wizard for creating a new table offers the option VARCHAR FOR BIT DATA, which should work, but it raises a syntax error when creating the table:
create table "BANK".Accounts
(
id NUMERIC not null,
pin VARCHAR FOR BIT DATA not null,
primary key(id)
)
The error is due to the presence of the word FOR, so I manually change the statement so that it is
create table "BANK".Accounts
(
id NUMERIC not null,
pin "VARCHAR FOR BIT DATA" not null,
primary key(id)
)
but now the problem is that the type does not exist. Any ideas?
Thank you.

Here's the manual page for VARCHAR FOR BIT DATA: http://db.apache.org/derby/docs/10.10/ref/rrefsqlj32714.html
Note the section that says:
Unlike the case for the CHAR FOR BIT DATA type, there is no default length for a VARCHAR FOR BIT DATA type. The maximum size of the length value is 32,672 bytes.
So the problem is that you haven't specified a length.
If your byte array is, say, 256 bytes long, you could specify
pin VARCHAR (256) FOR BIT DATA NOT NULL,
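Applied to the statement from the question, the full definition would then be, for example:
create table "BANK".Accounts
(
id NUMERIC not null,
pin VARCHAR (256) FOR BIT DATA not null,
primary key(id)
)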
You might also consider using BLOB if that fits your requirements. You can see all the Derby data types here: http://db.apache.org/derby/docs/10.10/ref/crefsqlj31068.html

Related

Why does Diesel fail to migrate a PostgreSQL database when the columns specify a length? [duplicate]

I am experimenting with PostgreSQL coming from MySQL, and I simply wish to create a table with this piece of code, which is valid in MySQL:
CREATE TABLE flat_10
(
pk_flat_id INT(30) DEFAULT 1,
rooms INT(10) UNSIGNED NOT NULL,
room_label CHAR(1) NOT NULL,
PRIMARY KEY (flat_id)
);
I get the error
ERROR: syntax error at or near "("
LINE 3: pk_flat_id integer(30) DEFAULT 1,
I have conducted searches on the web and found no answer, and I can't seem to find an answer in the PostgreSQL manual. What am I doing wrong?
I explicitly want to set a limit to the number of digits that can be inserted into the "pk_flat_id" field
Your current table definition does not impose a "size limit" in any way. In MySQL the parameter for the int data type is only a hint for applications about the display width of the column when displaying it.
You can store the value 2147483647 in an int(1) without any problems.
If you want to limit the values to be stored in an integer column you can use a check constraint:
CREATE TABLE flat_10
(
pk_flat_id bigint DEFAULT 1,
rooms integer NOT NULL,
room_label CHAR(1) NOT NULL,
PRIMARY KEY (pk_flat_id),
constraint valid_number
check (pk_flat_id <= 999999999)
);
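For illustration, an insert exceeding that limit would then be rejected, roughly like this (a sketch against the table above; the error text is approximate):
INSERT INTO flat_10 (pk_flat_id, rooms, room_label)
VALUES (1000000000, 3, 'A');
-- ERROR: new row for relation "flat_10" violates check constraint "valid_number"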
The answer is that you use numeric or decimal types, which are documented in the PostgreSQL manual.
Note that these types can also take an optional scale argument (digits after the decimal point), but you don't want that here. So:
CREATE TABLE flat_10
(
pk_flat_id DECIMAL(30) DEFAULT 1,
rooms DECIMAL(10) NOT NULL,
room_label CHAR(1) NOT NULL,
PRIMARY KEY (pk_flat_id)
);
I don't think that Postgres supports unsigned decimals. Also, it seems like you really want serial types for your keys, and the long number of digits is superfluous.
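If an auto-incrementing key is what's really wanted, a minimal sketch using a serial type could look like this (column names taken from the question):
CREATE TABLE flat_10
(
pk_flat_id bigserial PRIMARY KEY,
rooms integer NOT NULL,
room_label CHAR(1) NOT NULL
);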
Changing integer to numeric works.
CREATE TABLE flat_10
(
pk_flat_id bigint DEFAULT 1,
rooms numeric NOT NULL,
room_label CHAR(1) NOT NULL
);

Unable to insert rows into a new table in SQL, getting the error "string or binary data would be truncated"

Hi, I have created the table below:
create table person
(
sno int primary key identity(1,1) not null,
firstname nvarchar,
lastname nvarchar,
city nvarchar,
zip int
);
I have written the insert statement as
insert into person
(firstname,lastname,city,zip)
values
('firstname','lastname','city','123456')
but I am getting the following error when I try to insert values into the table:
string or binary data would be truncated
The statement has been terminated
Please help me overcome this problem; I am not sure where I went wrong.
Thanks in advance.
In your script there are two issues:
1. You did not specify the length of the varchar columns.
Take a look at this guidance:
Do not define columns, variables, and parameters using VARCHAR and
NVARCHAR data types without specifying the length attribute. This will not
produce dynamic-length string data, but will make SQL Server choose a
default length of 1 (NOTE: in some scenarios the length can be 30).
Replace it with:
create table person
(
sno int primary key identity(1,1) not null,
firstname nvarchar(50),
lastname nvarchar(50),
city nvarchar(50),
zip int
);
2. You are attempting to insert the zip as a varchar rather than as an int (which is the column's type).
Replace the insert with:
insert into person
(firstname,lastname,city,zip)
values
('firstname','lastname','city',123456)
Whenever you see the error
string or binary data would be truncated
it is a column-length issue. Check the length of the column in the SQL field definition: you are trying to insert data longer than the field allows. So either limit the data to the length defined in the database or increase the column length, as in the sketch below.
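For example, increasing a column's length in SQL Server would look roughly like this (a sketch against the person table from the question; the new length of 100 is arbitrary):
-- Widen the column instead of trimming the data
ALTER TABLE person ALTER COLUMN firstname nvarchar(100);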
You are inserting a string into an int column (zip); try this:
insert into person
(firstname,lastname,city,zip)
values
('firstname','lastname','city',123456)
EDIT:
You should also declare your nvarchar length, as the column is created as nvarchar(1) if you don't; change all of your nvarchars to nvarchar(50) or whatever maximum length you want.
Per TSQL Technet Article on NVARCHAR:
nvarchar[(n)]: When n is not specified in a data definition or variable declaration statement, the default length is 1. When n is not specified with the CAST function, the default length is 30.
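A quick sketch illustrating those defaults (the variable name and strings are illustrative):
DECLARE @s nvarchar = N'firstname';   -- no length specified, so this is nvarchar(1)
SELECT @s;                            -- returns just 'f'
SELECT CAST(N'a string that is clearly longer than thirty characters' AS nvarchar);   -- truncated to 30 characters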

Oracle Create Table -> Missing Right Parenthesis

I am new to writing SQL and using Oracle... so I'm sorry if this is obvious but I can't figure it out. It's telling me that I'm missing a right parenthesis but as far as I can tell they are all there. It seems to be a problem with the VARBINARY line but I don't know why.
CREATE TABLE DATA_VALUE
(
DATA_ID VARCHAR2(40) NOT NULL,
POSITION INT NOT NULL,
VALUE VARCHAR2(50),
BINARY_VALUE VARBINARY(50),
DATA_TYPE VARCHAR2(20),
CONSTRAINT DATA_VALUE_PK PRIMARY KEY(DATA_ID, POSITION)
);
VARBINARY is not an Oracle data type. A quick search suggests that MySQL and SQL Server have it, at least, but not Oracle. Perhaps you need to explain what you want to store in that field. The closest type I can think of that you might mean is RAW.
The valid built-in datatypes are listed in the documentation:
The RAW and LONG RAW data types store data that is not to be
explicitly converted by Oracle Database when moving data between
different systems. These data types are intended for binary data or
byte strings.
This Microsoft article suggests you should be using RAW as a replacement for VARBINARY too, at least for the size you're talking about.
CREATE TABLE DATA_VALUE
(
DATA_ID VARCHAR2(40) NOT NULL,
POSITION INT NOT NULL,
VALUE VARCHAR2(50),
BINARY_VALUE RAW(50),
DATA_TYPE VARCHAR2(20),
CONSTRAINT DATA_VALUE_PK PRIMARY KEY(DATA_ID, POSITION)
);
table DATA_VALUE created.
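As a quick usage note, values for a RAW column are typically supplied as hexadecimal strings, for example via HEXTORAW; a hedged sketch against the table above:
INSERT INTO DATA_VALUE (DATA_ID, POSITION, BINARY_VALUE)
VALUES ('row-1', 1, HEXTORAW('DEADBEEF'));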

Is VARCHAR or INT a validation rule?

When I set up a table, does declaring the 'name' attribute as varchar make this a validation rule, or does it just define the data type?
CREATE TABLE `categories` (
`id` int(11) NOT NULL auto_increment,
`name` varchar(200) NOT NULL,
`user_id` int(11) NOT NULL,
PRIMARY KEY (`id`)
);
What I'm confused about is whether VARCHAR sets a data type, a validation rule, or both. So in a way, is VARCHAR a validation rule because someone cannot input an INT?
It depends on the type of data you will be storing in those fields, and that is what determines which data type you should use.
Example: the name field will store character data (string literals) and hence can't be an INT type; it has to be a string type, which in SQL is CHAR, VARCHAR, or NVARCHAR.
It defines only the data type. If the wrong data type is passed, then depending on server settings, either an implicit conversion of the data is performed (in most cases) or the query is rejected (for example, under MySQL's STRICT mode). Data validation is usually done in the application or in a trigger.
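For example, in MySQL (a hedged sketch against the categories table above), inserting a non-numeric string into the INT column user_id is coerced to 0 with a warning when strict mode is off, and rejected when STRICT_TRANS_TABLES or STRICT_ALL_TABLES is enabled:
INSERT INTO `categories` (`name`, `user_id`) VALUES ('Books', 'abc');
-- non-strict: row inserted with user_id = 0, plus a warning
-- strict mode: fails with roughly "Incorrect integer value: 'abc' for column 'user_id'"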

Is there a reason that the primary key is nchar while other fields are nvarchar?

I am working through an example from MSDN that uses a small database to demonstrate data-driven testing, and here is the simple schema:
CREATE TABLE dbo.LogonInfoTest
(
UserId nchar(256) NOT NULL PRIMARY KEY CLUSTERED,
Password nvarchar(256) NULL,
IsValid bit NOT NULL
) ON [PRIMARY]
GO
My question is: What is the underlying reason for choosing nchar as the datatype of UserId and nvarchar as the datatype of Password?
There's no reason. The primary key should be NVARCHAR(256), as it is hard to believe that the UserId will always be exactly 256 characters. Right now this schema is potentially wasting (a lot of) space on disk. Note that with SQL Server 2008 row-compression storage, the fixed-length column would be stored as a variable-length one on disk anyway (trailing spaces are removed), but only if row compression is enabled.
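A rewrite of the schema along those lines would simply make the key variable-length:
CREATE TABLE dbo.LogonInfoTest
(
UserId nvarchar(256) NOT NULL PRIMARY KEY CLUSTERED,
Password nvarchar(256) NULL,
IsValid bit NOT NULL
) ON [PRIMARY]
GO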