clickhouse table creation error in ubuntu - sql

I installed ClickHouse on Ubuntu and connected to it with SQLyog. I can create a database, but I cannot create a table in it. It gives the following Code: 119 error:
ubuntu :) create table taxonomy_object_firewalls
CREATE TABLE taxonomy_object_firewalls
Query id: 37520dd5-44b9-436f-a5b6-96002f0a4ce7
0 rows in set. Elapsed: 0.001 sec.
Received exception from server (version 22.2.2):
Code: 119. DB::Exception: Received from localhost:9000. DB::Exception: Table engine is not specified in CREATE query. (ENGINE_REQUIRED)

Note that Rich's syntax has a small unnecessary () after the ENGINE definition, but it still created the table successfully in my testing. This syntax is closer to the documentation:
CREATE DATABASE IF NOT EXISTS helloworld;
CREATE TABLE helloworld.my_first_table
(
user_id UInt32,
message String,
timestamp DateTime,
metric Float32
)
ENGINE = MergeTree
PRIMARY KEY (user_id, timestamp);

Check out the "Getting Started" guide in the docs. ClickHouse is unique in that every table has an Engine. Table engines determine where and how the data is stored. When in doubt, use MergeTree:
CREATE DATABASE IF NOT EXISTS helloworld;
CREATE TABLE helloworld.my_first_table
(
user_id UInt32,
message String,
timestamp DateTime,
metric Float32
)
ENGINE = MergeTree()
PRIMARY KEY (user_id, timestamp)
Also, make sure you read about the primary key and how it works in ClickHouse. Primary keys are not typically unique, but instead determine the sort order.
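For example, here is a rough sketch (the values are purely illustrative) showing that duplicate primary-key values are accepted; MergeTree simply keeps the rows sorted by (user_id, timestamp) rather than enforcing uniqueness:
INSERT INTO helloworld.my_first_table VALUES (101, 'Hello', now(), 1.0);
INSERT INTO helloworld.my_first_table VALUES (101, 'Hello again', now(), 2.0);
SELECT count() FROM helloworld.my_first_table;   -- returns 2; both rows are kept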

Related

Cockroachdb varchar column length isn't flexible

I have a Spring Boot application that connects to CockroachDB. I have the following script in my Flyway migration, which creates the table:
CREATE TABLE IF NOT EXISTS sample_table (
name varchar,
groups varchar,
PRIMARY KEY (name));
The application starts fine, but whenever a value for the 'groups' column is longer than 255 characters, I get an error:
Caused by: org.postgresql.util.PSQLException: ERROR: value too long for type VARCHAR(255)
In the SQL script, I declared the 'groups' column as 'varchar', which should not restrict the length, so I am not sure why I am getting this error.
There isn't an implicit default limit on varchar in CockroachDB. This error indicates that the groups column was initialized with the type varchar(255) when the table was created. Running SHOW CREATE TABLE sample_table; should confirm this.
It's possible that something unexpected is going on in the Flyway migration and the table is not being created the way you want.
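As a rough sketch of how to verify and fix this (assuming the column really was created as VARCHAR(255); ALTER COLUMN TYPE support may depend on your CockroachDB version):
SHOW CREATE TABLE sample_table;
-- if 'groups' shows up as VARCHAR(255), widen it to an unbounded varchar
ALTER TABLE sample_table ALTER COLUMN groups TYPE varchar;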

ERROR: relation "schema.TableName_Id_seq" does not exist - when creating table in a new database

I'm having an issue where I used pgAdmin4's GUI to create a SQL table, and I want to use the generated CREATE TABLE script to create the same table in another database.
When I run the CREATE TABLE script generated by pgAdmin4 in my new database, I get the following error:
ERROR: relation "schema.TableName_Id_seq" does not exist
So, it appears that the issue is with my auto-incrementing id column that I created as type SERIAL.
The CREATE TABLE script as provided by pgAdmin4:
-- Table: myschema.TableName
-- DROP TABLE myschema."TableName";
CREATE TABLE myschema."TableName"
(
"Id" integer NOT NULL DEFAULT nextval('myschema."TableName_Id_seq"'::regclass),
/* Other columns here */
CONSTRAINT "TableName_pkey" PRIMARY KEY ("Id")
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
ALTER TABLE myschema."TableName"
OWNER to JoshuaSchlichting;
Why can't the CREATE TABLE script be used in another database? The relation "schema.TableName_Id_seq" didn't exist in the original database prior to creating that table. What's happening that is different?
The DDL script provided by pgAdmin4 is not complete. When the table was created, a sequence was implicitly created because the SERIAL type was selected for the Id column.
You can find this newly created sequence with pgAdmin4. To do this, go to
-> your server
-> your database
-> your schema
-> Sequences
-> Right click TableName_Id_seq
-> choose "Create script"
This reveals the script used to create this sequence. In this instance, the following was revealed:
-- SEQUENCE: myschema.TableName_Id_seq
-- DROP SEQUENCE myschema."TableName_Id_seq";
CREATE SEQUENCE myschema."TableName_Id_seq"
INCREMENT 1
START 1
MINVALUE 1
MAXVALUE 2147483647
CACHE 1;
The use of the CREATE SEQUENCE script can be avoided by changing the line of code used to create the Id column in the CREATE TABLE script. Example below:
original line:
"Id" integer NOT NULL DEFAULT nextval('myschema."TableName_Id_seq"'::regclass),
changed to: "Id" SERIAL NOT NULL,
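Putting it together, a sketch of a portable version of the generated script (other columns elided, as in the original); SERIAL creates the backing sequence implicitly, so the script works in a fresh database:
CREATE TABLE myschema."TableName"
(
"Id" SERIAL NOT NULL,
/* Other columns here */
CONSTRAINT "TableName_pkey" PRIMARY KEY ("Id")
);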

H2 database unusable when table contains 9 million records

I am using H2 as an embedded database started with AUTO_SERVER=TRUE.
At the very start I had only a few records and performance wasn't an issue even with no indexes defined.
When the table had more records performance seriously degraded and this was resolved by adding an index.
The db then performed very well until recently, when the number of records exceeded 8 million; now I have been unable to get any normal performance out of the DB, and I have tried changing cache_size etc., but with no improvement.
I have seen posts where people are using H2 with many millions and even billions of records, so is there something basic I am missing? Even basic queries such as select count(*) from HISTORICALDATA2 take so long that I end up cancelling the query.
Here is the table definition:
CREATE TABLE "PUBLIC"."HISTORICALDATA2"
(
REQUESTID integer,
SYMBOL varchar(50) NOT NULL,
EXCHANGE varchar(20),
SECTYPE varchar(10),
CURRENCYNAME varchar(5),
ENDDATETIME varchar(20),
DURATION varchar(20),
BARSIZE varchar(20),
WHATTOSHOW varchar(20),
USERTH integer,
FORMATDATE integer,
CHARTOPTIONS varchar(50),
DATETIMEDATA timestamp,
OPEN_PRICE decimal(20,2),
HIGH_PRICE decimal(20,2),
LOW_PRICE decimal(20,2),
CLOSE_PRICE decimal(20,2),
VOLUME integer,
COUNT_FIELD integer,
WAP integer,
HASGAPS boolean,
TSTAMP timestamp DEFAULT CURRENT_TIMESTAMP()
);
And the index:
CREATE INDEX HD_MAIN ON "PUBLIC"."HISTORICALDATA2"
(
SYMBOL,
EXCHANGE,
ENDDATETIME,
WHATTOSHOW,
DURATION,
BARSIZE
);
H2 and Production
H2 is not usually used as a production database. See Are there any reasons why h2 database shouldn't be used in production? for more details. Many of the answers give valid reasons for not doing this.
Migrating to Another DB
You can migrate your records away from h2 and move them to Postgres, MySQL, or Oracle.
See: How to convert H2Database database file to MySQL database .sql file?
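As a rough sketch, H2's built-in SCRIPT command can produce a SQL dump that you then adapt for the target database (the file names here are just placeholders):
SCRIPT NODATA TO 'schema_only.sql';   -- schema without rows
SCRIPT TO 'full_dump.sql';            -- schema plus INSERT statements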

How do I create a postgresql database using SQL's DDL on pgadmin4?

I'm learning DDL to create and define an SQL database with Postgresql 10.
I have something like the following SQL code in an .sql file, and I want to input it in psql or pgAdmin 4, just to test the syntax and see the database structure:
CREATE DATABASE database;
CREATE TYPE t_name AS
( first VARCHAR(30),
last VARCHAR(60)
);
CREATE TABLE telephone_m
( tnumber VARCHAR(15) NOT NULL UNIQUE
);
CREATE TABLE people
( curp CHAR(18) NOT NULL PRIMARY KEY,
pname t_name NOT NULL,
birth_date DATE NOT NULL,
telephone_m VARCHAR(15) REFERENCES telephone_m
);
CREATE TABLE clients
( curp CHAR(18) NOT NULL PRIMARY KEY,
cid SERIAL NOT NULL REFERENCES cards,
clocation VARCHAR(29)
) INHERITS (people);
CREATE TABLE cards
( cid BIGSERIAL NOT NULL PRIMARY KEY,
curp CHAR(18) NOT NULL REFERENCES clients,
trips SMALLINT,
distance NUMERIC,
points NUMERIC
);
CREATE TABLE drivers
( curp CHAR(18) NOT NULL PRIMARY KEY,
rfc CHAR(22) NOT NULL UNIQUE,
adress t_adress NOT NULL
) INHERITS (people);
In pgAdmin 4 I've tried right-clicking a new database -> CREATE Script, which opens the Query Editor; I copy-paste my code and execute it, but it returns:
ERROR: CREATE DATABASE cannot be executed from a function or multi-command string
SQL state: 25001
I've also tried using Query Tool directly from the PgAdmin tools menu with the same results.
The database is created just fine. But if you want to create objects in the new DB, you have to connect to it, in any client, including pgAdmin4.
And you cannot run CREATE DATABASE inside of a transaction; it must be committed on its own. Executing multiple commands at once is automatically wrapped into a single transaction in pgAdmin.
You have to execute CREATE DATABASE mydb; on its own (for instance by selecting only that line and pressing F5) while connected to any DB, even the maintenance DB "postgres". Then click "Databases" in the object browser of the pgAdmin4 main window/tab, press F5 to refresh the view, click the new DB, open a new Query Tool with the flash icon (in a new window/tab), and execute the rest of your script there.
psql scripts manage this by using the meta-command \c to connect to the new db after creating it, within the same session.
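A minimal psql sketch of that workflow (assuming the database is named mydb rather than "database"):
CREATE DATABASE mydb;
\c mydb
-- the remaining DDL now runs inside the new database
CREATE TYPE t_name AS
( first VARCHAR(30),
  last VARCHAR(60)
);
-- ...rest of the script...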
Asides:
"database" is no good name for a database.
CREATE TYPE AS (...), but just CREATE TABLE (...). No AS.
And you typically don't want to use the data type CHAR(18); see the links below and the short sketch after them:
Any downsides of using data type "text" for storing strings?
Get sum of integers for UNIQUE ids
What is the overhead for varchar(n)?
Should I add an arbitrary length limit to VARCHAR columns?
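As a small sketch of that alternative (assuming a fixed length of 18 really is required, which a CHECK constraint can express):
-- instead of: curp CHAR(18) NOT NULL PRIMARY KEY,
curp text NOT NULL PRIMARY KEY CHECK (length(curp) = 18),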
Also make sure the ; is there after the CREATE DATABASE statement (and perhaps give the db a better name).

What really happens when I use varchar(10) in the sqlite command-line shell?

I'm messing around with SQLite for the first time by working through some of the SQLite documentation. In particular, I'm using Command Line Shell For SQLite and the SoupToNuts SQLite Tutorial on Sourceforge.
According to the SQLite datatype documentation, there are only 5 datatypes in SQLite. However, in the two tutorial documents above, I see where the authors use commands such as
create table tbl1(one varchar(10), two smallint);
create table t1 (t1key INTEGER PRIMARY KEY,data TEXT,num double,timeEnter DATE);
which contain datatypes that aren't listed by SQLite, yet these commands work just fine.
Additionally, when I ran .dump to see the SQL statements, these datatype specifications were preserved:
sqlite> CREATE TABLE Vulnerabilities (
...> VulnerabilityID unsigned smallint primary key,
...> VulnerabilityName varchar(10),
...> VulnerabilityDescription longtext);
sqlite> .dump
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE Vulnerabilities (
VulnerabilityID unsigned smallint primary key,
VulnerabilityName varchar(10),
VulnerabilityDescription longtext);
COMMIT;
sqlite>
So, what gives? Does SQLite keep a reference for any datatype specified in the SQL yet converts it behind the scenes to one of its 5 datatypes? Or is there something else I'm missing?
SQLite uses dynamic typing.
SQLite will allow you to insert an integer into that VARCHAR(10) column.
SQLite will not complain if you insert a string longer than 10 characters into that column.
As el.pescado mentions, SQLite has storage classes and type "affinities".
If you attempt to insert a value into a column that belongs to a particular affinity, then SQLite will try to convert that value to match the affinity.
If the conversion doesn't work, the value is inserted as-is.
So while your more granular datatypes are saved (apparently) to the metadata table, they are not being used by SQLite.
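A small sketch of this behavior (the table and column names are just illustrative):
CREATE TABLE demo (v VARCHAR(10));
INSERT INTO demo VALUES (42);                                  -- an integer is accepted
INSERT INTO demo VALUES ('much longer than ten characters');   -- not truncated at 10
SELECT v, typeof(v), length(v) FROM demo;
-- 42|text|2                                (TEXT affinity converts the integer to text)
-- much longer than ten characters|text|31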
There are not five datatypes, but rather five datatype "classes" that "real" datatypes fall into. So TINYINT, SMALLINT, and BIGINT are three different datatypes, but all belong to the INTEGER storage class.
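For instance, a quick sketch (names are illustrative):
CREATE TABLE int_demo (a TINYINT, b SMALLINT, c BIGINT);
INSERT INTO int_demo VALUES (1, 2, 3);
SELECT typeof(a), typeof(b), typeof(c) FROM int_demo;   -- integer|integer|integer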