Oracle Create Table -> Missing Right Parenthesis - sql

I am new to writing SQL and using Oracle... so I'm sorry if this is obvious, but I can't figure it out. It's telling me that I'm missing a right parenthesis, but as far as I can tell they are all there. It seems to be a problem with the VARBINARY line, but I don't know why.
CREATE TABLE DATA_VALUE
(
DATA_ID VARCHAR2(40) NOT NULL,
POSITION INT NOT NULL,
VALUE VARCHAR2(50),
BINARY_VALUE VARBINARY(50),
DATA_TYPE VARCHAR2(20),
CONSTRAINT DATA_VALUE_PK PRIMARY KEY(DATA_ID, POSITION)
);

VARBINARY is not an Oracle data type. A quick search suggests MySQL and SQL Server have it, at least, but not Oracle. Perhaps you need to explain what you want to store in that field. The closest equivalent I can think of is RAW.
The valid built-in datatypes are listed in the documentation:
The RAW and LONG RAW data types store data that is not to be
explicitly converted by Oracle Database when moving data between
different systems. These data types are intended for binary data or
byte strings.
This Microsoft article suggests you should be using RAW as a replacement for VARBINARY too, at least for the size you're talking about.
CREATE TABLE DATA_VALUE
(
DATA_ID VARCHAR2(40) NOT NULL,
POSITION INT NOT NULL,
VALUE VARCHAR2(50),
BINARY_VALUE RAW(50),
DATA_TYPE VARCHAR2(20),
CONSTRAINT DATA_VALUE_PK PRIMARY KEY(DATA_ID, POSITION)
);
table DATA_VALUE created.
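As a quick usage note, RAW values are usually supplied as hex, for example via HEXTORAW(). A sketch (the values here are made up):
INSERT INTO DATA_VALUE (DATA_ID, POSITION, BINARY_VALUE)
VALUES ('row-1', 1, HEXTORAW('DEADBEEF'));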

Related

How to make the Primary Key have X digits in PostgreSQL?

I am fairly new to SQL but have been working hard to learn. I am currently stuck on an issue with setting a primary key to have 8 digits no matter what.
I tried using INT(8), but that didn't work. AUTO_INCREMENT doesn't work in PostgreSQL either; I saw there are a couple of data types that auto-increment, but I still have the issue of the keys not being long enough.
Basically I want to have numbers represent User IDs, starting at 10000000 and moving up. 00000001 and up would work too, it doesn't matter to me.
I saw an answer that was close to this, but it didn't apply to PostgreSQL unfortunately.
Hopefully my question makes sense, if not I'll try to clarify.
My code (which I am using from a website to try and make my own forum for a practice project) is:
CREATE Table users (
user_id INT(8) NOT NULL AUTO_INCREMENT,
user_name VARCHAR(30) NOT NULL,
user_pass VARCHAR(255) NOT NULL,
user_email VARCHAR(255) NOT NULL,
user_date DATETIME NOT NULL,
user_level INT(8) NOT NULL,
UNIQUE INDEX user_name_unique (user_name),
PRIMARY KEY (user_id)
) TYPE=INNODB;
It doesn't work in PostgreSQL (9.4 Windows x64 version). What do I do?
You are mixing two aspects:
the data type allowing certain values for your PK column
the format you chose for display
AUTO_INCREMENT is a non-standard MySQL concept; SQL Server uses IDENTITY(1,1), and so on.
Use a serial column in Postgres:
CREATE TABLE users (
user_id serial PRIMARY KEY
, ...
)
That's a pseudo-type implemented as integer data type with a column default drawing from an attached SEQUENCE. integer is easily big enough for your case (-2147483648 to +2147483647).
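Roughly, serial is shorthand for something like the following (a sketch; users_user_id_seq is the name Postgres would generate for the sequence):
CREATE SEQUENCE users_user_id_seq;
CREATE TABLE users (
  user_id integer PRIMARY KEY DEFAULT nextval('users_user_id_seq')
);
ALTER SEQUENCE users_user_id_seq OWNED BY users.user_id;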
If you really need to enforce numbers with a maximum of 8 decimal digits, add a CHECK constraint:
CONSTRAINT id_max_8_digits CHECK (user_id BETWEEN 0 AND 99999999)
To display the number in any fashion you desire (0-padded to 8 digits, in your case), use to_char():
SELECT to_char(user_id, '00000000') AS user_id_8digit
FROM users;
That's very fast. Note that the output is text now, not integer.
A couple of other things are MySQL-specific in your code:
int(8): use int.
datetime: use timestamp.
TYPE=INNODB: just drop that.
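Putting it together, a Postgres version of the table from the question might look like this (a sketch keeping the original names):
CREATE TABLE users (
  user_id    serial PRIMARY KEY,
  user_name  varchar(30) NOT NULL UNIQUE,
  user_pass  varchar(255) NOT NULL,
  user_email varchar(255) NOT NULL,
  user_date  timestamp NOT NULL,
  user_level int NOT NULL
);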
You could make user_id a serial type column and set the seed of this sequence to 10000000.
Why?
int(8) in MySQL doesn't actually store only 8 digits; it only displays 8 digits.
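For the seed suggestion above, a sketch (assuming the default sequence name Postgres generates for a serial column, table_column_seq):
CREATE TABLE users (
  user_id serial PRIMARY KEY
);
ALTER SEQUENCE users_user_id_seq RESTART WITH 10000000;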
Postgres supports check constraints. You could use something like this:
create table foo (
bar_id int primary key check ( 9999999 < bar_id and bar_id < 100000000 )
);
If this is for numbering important documents like invoices that shouldn't have gaps, then you shouldn't be using sequences / auto_increment

Create VARCHAR FOR BIT DATA column

I am trying to create a SQL table in Netbeans 8.0 with one of its columns meant to store a byte[] (so VARBINARY is the type I am looking for). The wizard for the creation of a new table offers me the option of VARCHAR FOR BIT DATA, which should work, but it raises a syntax error when creating the table:
create table "BANK".Accounts
(
id NUMERIC not null,
pin VARCHAR FOR BIT DATA not null,
primary key(id)
)
The error is due to the presence of the word FOR, so I manually change the statement so that it is
create table "BANK".Accounts
(
id NUMERIC not null,
pin "VARCHAR FOR BIT DATA" not null,
primary key(id)
)
but now the problem is that the type does not exist. Any ideas?
Thank you.
Here's the manual page for VARCHAR FOR BIT DATA: http://db.apache.org/derby/docs/10.10/ref/rrefsqlj32714.html
Note the section that says:
Unlike the case for the CHAR FOR BIT DATA type, there is no default length for a VARCHAR FOR BIT DATA type. The maximum size of the length value is 32,672 bytes.
So the problem is that you haven't specified a length.
If your byte array is, say, 256 bytes long, you could specify
pin VARCHAR (256) FOR BIT DATA NOT NULL,
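So the full statement, assuming a 256-byte maximum for the PIN, would be:
create table "BANK".Accounts
(
id NUMERIC not null,
pin VARCHAR (256) FOR BIT DATA not null,
primary key(id)
)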
You might also consider using BLOB if that fits your requirements. You can see all the Derby data types here: http://db.apache.org/derby/docs/10.10/ref/crefsqlj31068.html

Boolean giving invalid datatype - Oracle

I am trying to create a table in Oracle SQL Developer, but I am getting error ORA-00902: invalid datatype.
Here is my schema for the table creation
CREATE TABLE APPOINTMENT(
Appointment NUMBER(8) NOT NULL,
PatientID NUMBER(8) NOT NULL,
DateOfVisit DATE NOT NULL,
PhysioName VARCHAR2(50) NOT NULL,
MassageOffered BOOLEAN NOT NULL, -- the line giving the error
CONSTRAINT APPOINTMENT_PK PRIMARY KEY (Appointment)
);
What am I doing wrong?
Thanks in advance
Last I heard, there was no boolean type in Oracle. Use NUMBER(1) instead!
Oracle does not support the boolean data type at schema level, though it is supported in PL/SQL blocks. By schema level, I mean you cannot create table columns with type as boolean, nor nested table types of records with one of the columns as boolean. You have that freedom in PL/SQL though, where you can create a record type collection with a boolean column.
As a workaround I would suggest using the CHAR(1 BYTE) type, as it takes just one byte to store your value, as opposed to two bytes for the NUMBER format. Read more about data types and sizes in the Oracle Docs.
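A sketch of that workaround applied to the table from the question, with a CHECK constraint so only 'Y'/'N' are allowed:
CREATE TABLE APPOINTMENT(
Appointment NUMBER(8) NOT NULL,
PatientID NUMBER(8) NOT NULL,
DateOfVisit DATE NOT NULL,
PhysioName VARCHAR2(50) NOT NULL,
MassageOffered CHAR(1 BYTE) DEFAULT 'N' NOT NULL
    CONSTRAINT MASSAGEOFFERED_YN CHECK (MassageOffered IN ('Y', 'N')),
CONSTRAINT APPOINTMENT_PK PRIMARY KEY (Appointment)
);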
Oracle doesn't support boolean for table column datatype. You should probably use a CHAR(1) (Y/N)
You can see more info in this other answer.
I think the documentation below gives a good overview:
http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/datatypes.htm#CJACJGBG
When using an Entity class to create the schema, defining the boolean value as below will help
@Column(columnDefinition = "number default 0")
private boolean picked;

Selecting from a table and inserting into another table's column of a different type using a query in MS Access

I have some txt files that contain tables with a mix of different records in them, which have different types of values and definitions for columns. I was thinking of importing them into a table and running a query to separate the different record types, since an identifier for this is listed in the first column. Is there a way to change the value type of a column in a query? It will be a pain to treat all of them as text. If you have any other suggestions on how to solve this, please let me know as well.
Here is an example of tables for 2 record types provided by the website where I got the data from
create table dbo.PUBACC_A2
(
Record_Type char(2) null,
unique_system_identifier numeric(9,0) not null,
ULS_File_Number char(14) null,
EBF_Number varchar(30) null,
spectrum_manager_leasing char(1) null,
defacto_transfer_leasing char(1) null,
new_spectrum_leasing char(1) null,
spectrum_subleasing char(1) null,
xfer_control_lessee char(1) null,
revision_spectrum_lease char(1) null,
assignment_spectrum_lease char(1) null,
pfr_status char(1) null
)
go
create table dbo.PUBACC_AC
(
record_type char(2) null,
unique_system_identifier numeric(9,0) not null,
uls_file_number char(14) null,
ebf_number varchar(30) null,
call_sign char(10) null,
aircraft_count int null,
type_of_carrier char(1) null,
portable_indicator char(1) null,
fleet_indicator char(1) null,
n_number char(10) null
)
Yes, you can do what you want. In MS Access you can use any VBA function inside a query, so with some IIF() expressions you can cast values conditionally:
IIF(FirstColumn="value1", CDate(SecondColumn), NULL) as DateValue,
IIF(FirstColumn="value2", CDec(SecondColumn), NULL) as DecimalValue,
IIF(FirstColumn="value3", CStr(SecondColumn), NULL) as StringValue
You can use all/any of the above in your SELECT.
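For example, a hypothetical query against the imported table (Staging, FirstColumn, and SecondColumn are assumed names):
SELECT FirstColumn,
       IIF(FirstColumn = "value1", CDate(SecondColumn), Null) AS DateValue
FROM Staging;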
EDIT:
From your comments it seems that you want to split them into different tables - importing as text should not be a problem in that case.
a)
After you import the data into the initial table, create the proper tables manually with the correct column types; then you can INSERT into them.
b)
You could even do a make-table query, but it might be faster to create the tables manually. If you do a make-table query, you have to be sure that you have cast the data into the proper types in your SELECT.
EDIT2:
As you updated the question showing the structure it becomes obvious that my suggestion above will not help directly.
If this is one time process you can follow HLGEM's solution. Here are some more details.
1) Import into a table with two columns - RecordType char(2), Rest memo
2) Now you can split the data (make two queries that select based on RecordType) and re-export the data (to be able to use access' import wizard)
3) Now you have two text files with proper structure which can be easily imported
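As a sketch of step 2, the split could be done with two make-table queries (Staging, RecordType, and Rest are the assumed names from step 1):
SELECT RecordType, Rest INTO Records_A2 FROM Staging WHERE RecordType = "A2";
SELECT RecordType, Rest INTO Records_AC FROM Staging WHERE RecordType = "AC";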
I did this in my last job. You start with a staging table that has one column, or two columns if your identifier is always the same length.
Then using the record identifier, you move the data to another set of staging tables, one for each type of record you have. This will be in columns for the data and can have the correct data types. Then you do any data cleaning you need to do. Then you insert into the real production table.
If you have a column defined as text, because it has both alphas and numbers, you'll only be able to query it as if it were text. Once you've separated out the different "types" of data into their own tables, you should be able to change the schema definition. Please comment here if I'm misunderstanding what you're trying to do.

What really happens when I use varchar(10) in the sqlite command-line shell?

I'm messing around with SQLite for the first time by working through some of the SQLite documentation. In particular, I'm using Command Line Shell For SQLite and the SoupToNuts SQLite Tutorial on Sourceforge.
According to the SQLite datatype documentation, there are only 5 datatypes in SQLite. However, in the two tutorial documents above, I see where the authors use commands such as
create table tbl1(one varchar(10), two smallint);
create table t1 (t1key INTEGER PRIMARY KEY,data TEXT,num double,timeEnter DATE);
which contain datatypes that aren't listed by SQLite, yet these commands work just fine.
Additionally, when I ran .dump to see the SQL statements, these datatype specifications are preserved:
sqlite> CREATE TABLE Vulnerabilities (
...> VulnerabilityID unsigned smallint primary key,
...> VulnerabilityName varchar(10),
...> VulnerabilityDescription longtext);
sqlite> .dump
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE Vulnerabilities (
VulnerabilityID unsigned smallint primary key,
VulnerabilityName varchar(10),
VulnerabilityDescription longtext);
COMMIT;
sqlite>
So, what gives? Does SQLite keep a reference for any datatype specified in the SQL yet converts it behind the scenes to one of its 5 datatypes? Or is there something else I'm missing?
SQLite uses dynamic typing.
SQLite will allow you to insert an integer into that VARCHAR(10) column.
SQLite will not complain if you insert a string longer than 10 characters into that column.
As el.pescado mentions, SQLite has storage classes and type "affinities".
If you insert a value into a column that has a particular affinity, SQLite will try to convert that value to match the affinity.
If the conversion doesn't work, the value is stored as-is.
So while your more granular datatypes are saved (apparently) to the metadata table, they are not being used by SQLite.
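You can watch these rules in action with the typeof() function. A sketch of a shell session (the output follows from SQLite's documented affinity rules):
sqlite> CREATE TABLE t(one varchar(10), two smallint);
sqlite> INSERT INTO t VALUES (123, 'abc');
sqlite> SELECT typeof(one), typeof(two) FROM t;
text|text
The integer 123 was converted to text to match the TEXT affinity of one, while 'abc' could not be converted to an integer, so it was stored in two as text, unchanged.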
There are not five datatypes, but rather five datatype "classes" that "real" datatypes fall into. So TINYINT, SMALLINT, and BIGINT are three different datatypes, but all belong to the INTEGER storage class.