The CREATE TABLE statement below does not work:
CREATE TABLE test_table (date varchar, id varchar, PRIMARY KEY (date,id))
I can't seem to find any docs on primary keys in Trino.
You don't. As the description says:
Trino is a distributed SQL query engine designed to query large data sets distributed over one or more heterogeneous data sources.
Trino does not maintain primary keys, indexes, and so on. See also the use cases and Trino concepts documentation.
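If you simply drop the key clause, the table definition itself is acceptable to Trino (whether it succeeds still depends on the connector in use):

CREATE TABLE test_table (date varchar, id varchar);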
I know that Hive cannot create surrogate keys, or at least that it is difficult. I want to understand how companies have implemented dimensional modeling in their warehouses.
One way I can think of is leaving the dimension details as-is in the fact table, then moving the distinct dimension values to a separate table. But then how are SCD1 and SCD2 handled? I have watched Kimball's talks on Cloudera and I still don't understand how this works.
There are two ways of handling this problem in Hive.
The first does not directly answer your question, and that is to use natural keys instead of surrogates. While surrogates are more convenient and performant, since you're using Hive I'm guessing that performance isn't one of your major criteria, so the cost of using natural keys will just be the extra lines of code you have to write to cater for compound keys.
The second way is to use Hive's windowing functions to calculate the surrogate. I don't have a Hive environment handy to test this query, but the surrogate would look something like:
(select max(surrogate_key_column) from dimension_table)
+ row_number() over (order by 1)
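A fuller version of that idea might look like the following (untested, like the snippet above; staging_table and the column names are hypothetical). The COALESCE handles the first load, when the dimension is still empty:

INSERT INTO TABLE dimension_table
SELECT m.max_key + ROW_NUMBER() OVER (ORDER BY 1) AS surrogate_key_column,
       s.natural_key,
       s.attribute
FROM staging_table s
CROSS JOIN (
  SELECT COALESCE(MAX(surrogate_key_column), 0) AS max_key
  FROM dimension_table
) m;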
As far as I know, since version 3.0 Hive supports surrogate keys on ACID tables:
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/using-hiveql/content/hive_surrogate_keys.html
Summarised from the link:
The SURROGATE_KEY UDF generates a unique Id for every row that you insert into a table.
Example usage:
-- Create a table:
CREATE TABLE students_v2 (
  `ID` BIGINT DEFAULT SURROGATE_KEY(),
  row_id INT,
  name VARCHAR(64),
  dorm INT,
  PRIMARY KEY (ID) DISABLE NOVALIDATE);

-- Insert data, which automatically generates surrogate keys for the primary keys:
INSERT INTO students_v2 (row_id, name, dorm) SELECT * FROM students;

-- Take a look at the surrogate keys:
SELECT * FROM students_v2;
I was told to create an autID identity column in a table that has a GUID varchar(40) as its primary key, and to use the autID column as the reference key to help in the join process. But is that a good approach? It leads to tables like this:
CREATE TABLE OauthClientInfo
(
    autAppID INT IDENTITY(1,1) PRIMARY KEY,
    strClientID VARCHAR(40), -- GUID
    strClientSecret VARCHAR(40)
)

CREATE TABLE OAuth_AuthToken
(
    autID INT IDENTITY(1,1) PRIMARY KEY,
    strAuthToken VARCHAR(40),
    autAppID_fk INT FOREIGN KEY REFERENCES OauthClientInfo(autAppID)
)
I was told that having autAppID_fk helps in joins versus having a strClientID_fk varchar(40), but my point is that we are unnecessarily adding a new id as a reference, which sometimes forces extra joins. Please comment with your views on this.
For example, to find which strClientID a given strAuthToken belongs to: if strClientID_fk were the reference key, the OAuth_AuthToken data would be meaningful on its own; a sketch follows.
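Here is that sketch (the @token variable is just for illustration):

-- With autAppID_fk, resolving a token's client id needs a join:
SELECT c.strClientID
FROM OAuth_AuthToken t
JOIN OauthClientInfo c ON c.autAppID = t.autAppID_fk
WHERE t.strAuthToken = @token;

-- With strClientID_fk stored directly, the same lookup would read
-- straight off OAuth_AuthToken.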
I was told to create an autID identity column in a table that has a GUID varchar(40) as its primary key, and to use the autID column as the reference key to help in the join process. But is that a good approach?
You were told this by someone who confuses clustering and primary keys. They are not one and the same, despite the confusing implementation of the database engine that "helps" the lazy developer.
You might get arguments about adding an identity column to every table and designating it as the primary key. I'll disagree with all of this. One does not BLINDLY do anything of this type in a schema. You do the proper analysis, identify (and enforce) any natural keys, and then you decide whether a synthetic key is both useful and needed. And then you determine which columns to use for the clustered index (because you only have one). And then you verify the appropriateness of your decisions, based on how efficient and effective your schema is under load, by testing. There are no absolute rules about how to implement your schema.
Many of your indexing (and again note - indexing and primary key are completely separate things) choices will be affected by how your tables are updated over time. Do you have hotspots that need to be minimized? Does your table experience lots of random inserts, updates, and deletes over time? Or maybe just lots of inserts but relatively few updates or deletes? These are just some of the factors that guide your decision.
You need to use the UNIQUEIDENTIFIER data type for GUID columns, not VARCHAR.
As far as I have read, an auto-increment int is the most suitable column for a clustered index, and strClientID is the worst candidate for a PK or clustered index.
Most importantly, you haven't mentioned the purpose of strClientID. What kind of data does it hold, and how does it get populated?
If I create two tables and I want to set one column as a foreign key to another table's column, why the hell am I allowed to choose a datatype for the foreign key column?
It just doesn't make any sense, or am I missing something? Is there any scenario where a foreign key column has a different datatype on purpose?
To go a little deeper into my concerns: I tried to use pgAdmin to build a simple Postgres DB. I made the first table with a serial primary key. Then I tried to make the foreign key, but with what datatype? I have seen somewhere that serial is bigint unsigned, but that option doesn't even exist in pgAdmin. Of course I could use SQL, but then why am I using a GUI? So I tried Navicat instead: same problem. I feel like with every choice I make another mistake in my DB design...
EDIT:
Perhaps I asked the question the wrong way.
I was allowed to build this structure:
CREATE TABLE "user"
(
    id bigint NOT NULL,
    CONSTRAINT user_pkey PRIMARY KEY (id)
)
WITH (
    OIDS=FALSE
);

CREATE TABLE book
(
    "user" integer, -- note: integer, while "user".id is bigint
    CONSTRAINT dependent_user_fkey FOREIGN KEY ("user")
        REFERENCES "user" (id) MATCH SIMPLE
        ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
    OIDS=FALSE
);
I insert some data into table "user":

INSERT INTO "user"(id)
VALUES (5000000000);

But the following insert fails:

INSERT INTO book("user")
VALUES (5000000000);

with ERROR: integer out of range, which is understandable, but an obvious design error.
And my question is: why, when we set the CONSTRAINT, are the data types not validated? If I'm wrong, an answer should describe a scenario where it is useful to have different data types.
Actually, it does make sense, and here is why:
In a table, you can set any column as its primary key, so it could be an integer, a double, a string, etc., even though nowadays we mostly use either integers or, more recently, strings.
Since the foreign key points to another table's primary key, you need to specify the foreign key's datatype, and it obviously needs to be the same datatype.
EDIT:
SQL implementations are lax in this case, as we can see in your example: they do allow compatible types (INT and BIGINT, FLOAT or DECIMAL and DOUBLE), but at your own risk.
However, the SQL standard specifies that both datatypes must be the same: if the datatype is character, they must have the same length; if it is integer, they must have the same size and must both be signed or both unsigned.
You can see this for yourself in a chapter from a MySQL book published in 2003.
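In the example above, the fix is to give the referencing column the same type as the key it points to, e.g.:

ALTER TABLE book ALTER COLUMN "user" TYPE bigint;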
Hope this answers your question.
To answer your question of why you'd ever want a different type for a foreign key vs. a primary key... here is one scenario:
I'm in a situation where an extremely large postgres table is running out of integer values for its id sequence. Lots of other, equally large tables have a foreign key to that parent table.
We are upsizing the ID from integer to bigint, both in the parent table and all the child tables. This requires a full table rewrite. Due to the size of the tables and our uptime commitments and maintenance window size, we cannot rewrite all these tables in one window. We have about three months before it blows up.
So between maintenance windows, we will have primary keys and foreign keys with the same numeric value, but different size columns. This works just fine in our experience.
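A sketch of the interim state (Postgres syntax; the table and column names are hypothetical):

-- Window 1: the parent's key becomes bigint (a full table rewrite).
ALTER TABLE parent ALTER COLUMN id TYPE bigint;

-- Child tables still carry integer FK columns; the constraints stay
-- valid because every existing id still fits in an integer.

-- Later windows, one child table at a time:
ALTER TABLE child ALTER COLUMN parent_id TYPE bigint;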
Even outside an active migration strategy like this, I could see creating a new child table with a bigint foreign key, with the anticipation that "someday" the parent table will get its primary key upsized from integer to bigint.
I don't know if there is any performance penalty with mismatched column sizes. That question is actually what brought me to this page, as I've been unable to find guidance on it online.
(Tangent: Never create any table with an integer id. Go with bigint, no matter what you think your data will look like in ten years. You're welcome.)
In an LDM (logical data model) I recently made, I have an entity with the following structure:
Building_ID (Primary Key, Foreign Key),
Plant_ID (Foreign Key),
Build_Year (Primary Key),
Size
I need to create a table in a SQL database using this design. The question I'm running into is how to handle the primary keys here. Is it OK for a SQL table to have multiple primary keys? If so, which column should act as the unique index? Should I create a new column to act as the unique index identifier?
Any SQL table in any relational database system I know (SQL Server, Oracle, Firebird, IBM DB2, Sybase, etc.) can only ever have one primary key - after all, it's the primary key - there can only ever be one.
However, a primary key can be made up of multiple columns (a "compound primary key"). There are downsides: all foreign key constraints from other tables must also specify all the columns in the compound PK, which makes joining the tables a bit of a pain (since you need to specify equality constraints for every column in the key in your JOIN).
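For the entity above, a compound-key table might look like this (the types, table names, and referenced tables are assumptions):

CREATE TABLE Building_History (
    Building_ID INT NOT NULL,
    Plant_ID    INT NOT NULL,
    Build_Year  INT NOT NULL,
    Size        DECIMAL(10, 2),
    PRIMARY KEY (Building_ID, Build_Year),
    FOREIGN KEY (Building_ID) REFERENCES Building (Building_ID),
    FOREIGN KEY (Plant_ID)    REFERENCES Plant (Plant_ID)
);

-- Any table referencing it must carry both key columns:
CREATE TABLE Renovation (
    Renovation_ID INT PRIMARY KEY,
    Building_ID   INT NOT NULL,
    Build_Year    INT NOT NULL,
    FOREIGN KEY (Building_ID, Build_Year)
        REFERENCES Building_History (Building_ID, Build_Year)
);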
Besides a primary key, you can also have multiple alternate keys: other columns that also identify a row uniquely. Those make excellent candidates for indexes, if they can help you speed up access to the table (but don't over-index your tables! Less is more).
I have a table with 16 columns. It will be the most frequently used table in a web application, and it will contain a few hundred thousand rows. The database is SQL Server 2008.
My question is the choice of primary key. Which is quicker: a composite primary key made of two bigints, or a single varchar value that I would have to build by concatenation?
There are many more factors you must consider:
the prevalent data-access pattern: how are you going to access the table?
how many non-clustered indexes?
frequency of updates
pattern of updates (sequential inserts, random)
pattern of deletes
All these factors, and especially the first two, should drive your choice of the clustered key. Note that the primary key and the clustered key are different concepts, often confused. See my answer to Should I design a table with a primary key of varchar or int? for a lengthier discussion of the criteria that drive the clustered key choice.
Without any information on your access patterns, I can give an answer that is brief, concise, and actually correct: the narrower key is always quicker (for reasons of IO). However, this response carries absolutely no value. The only thing that will make your application faster is choosing a key that will actually be used by the query execution plans.
A primary key which does not rely on any underlying values (called a surrogate key) is a good choice. That way, if the row changes, the ID doesn't have to, and any tables referring to it via foreign keys will not need to change. I would choose an autonumber (i.e. IDENTITY) column for the primary key column.
In terms of performance, a shorter, integer based primary key is best.
You can still create your clustered index on multiple columns.
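For example, a sketch of that layout (the table and column names are made up; SQL Server syntax):

CREATE TABLE Orders (
    OrderId   INT IDENTITY(1,1) NOT NULL,
    AccountId BIGINT NOT NULL,
    RegionId  BIGINT NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderId)
);

-- The clustered index can still span the two natural columns:
CREATE CLUSTERED INDEX CIX_Orders ON Orders (AccountId, RegionId);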
Why not just a single auto-generated INT primary key? An INT is 32 bits, so it can handle about 2 billion positive values (over 4 billion if you also use the negative range).

CREATE TABLE Records (
    recordId INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    ...
);
A surrogate key might be a fine idea if there are foreign key relationships on this table. Using a surrogate will save tables that refer to it from having to duplicate all those columns in their tables.
Another important consideration is indexes on the columns you'll be using in WHERE clauses; your performance will suffer without them. Make sure that you add appropriate indexes, over and above the primary key, to avoid table scans.
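For instance (the column name is hypothetical):

-- Index a column that appears in frequent WHERE clauses:
CREATE INDEX IX_Records_CustomerId ON Records (customerId);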
What do you mean by quicker? If you need to search quicker, you can create an index on any column or use full-text search. The primary key just makes sure you do not have duplicate records.
The decision depends on how the table is used. If you are mostly saving data and rarely retrieving it, use a simple key. If you are mostly querying the data, and it is mostly static data whose key values will not change, your index strategy needs to optimize for the most frequent query. Personally, I like the idea of using a GUID for the primary key and an int for the clustered index (a sketch follows below); that allows for easy data imports. But it really depends upon your needs.
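A sketch of that GUID-PK / int-clustered layout (the names are assumed; SQL Server syntax):

CREATE TABLE ImportedData (
    RowGuid UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
    RowSeq  INT IDENTITY(1,1) NOT NULL,
    CONSTRAINT PK_ImportedData PRIMARY KEY NONCLUSTERED (RowGuid)
);

CREATE UNIQUE CLUSTERED INDEX CIX_ImportedData ON ImportedData (RowSeq);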
Lots of variables you haven't mentioned: whether the data in the two columns is "natural" and there is a benefit in identifying records by a logical ID, whether disclosure of the key via the UI poses a risk, and how important performance is (a few hundred thousand rows is pretty minimal).
If you're not too fussy, go down the autonumber path for speed and simplicity. Also take a look at all the posts on this site about SQL primary key types. Heaps of info here.
Is it an ER model or a dimensional model? In an ER model, the key parts should be kept separate and should not be surrogated. The entire record could have a single surrogate for easy reference in URLs etc.; this could be a hash of all parts of the composite key, or an identity.
In a dimensional model, they must also be kept separate, and they should all be surrogated.