Multi-valued column in SQL

I created a table for Boat that contains the following columns: BName, Type, Price, OName.
However, the Type column should be one of the following: Sailboat, Houseboat, or Deckboat.
How can I reflect this in the CREATE TABLE statement? I searched around and came up with the following, but I'm not sure whether it's right:
CREATE TABLE Boat
(
BName varchar(255),
BType int,
Price double,
OName varchar(255),
PRIMARY KEY (BName),
FOREIGN KEY (BType) REFERENCES BoType(ID)
);
CREATE TABLE BoType
(
ID int PRIMARY KEY,
Type varchar(255)
)
Is this the best way to do it?

You can try something like this:
mycol VARCHAR(10) NOT NULL CHECK (mycol IN('moe', 'curley', 'larry'))
Here are more details on MSSQL "Check Constraints":
http://technet.microsoft.com/en-us/library/ms188258%28v=sql.105%29.aspx
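A quick way to see the constraint in action, sketched here with SQLite through Python's sqlite3 module (the table name `stooges` is made up for the demo; the CHECK syntax is the same idea as above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE stooges (
        mycol VARCHAR(10) NOT NULL CHECK (mycol IN ('moe', 'curley', 'larry'))
    )
""")

# An allowed value: the insert succeeds.
conn.execute("INSERT INTO stooges (mycol) VALUES ('moe')")

# A disallowed value: the CHECK constraint rejects it.
try:
    conn.execute("INSERT INTO stooges (mycol) VALUES ('shemp')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```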

That's the best way to do it; just make sure you populate the BoType table with the desired reference values (i.e. Sailboat, Houseboat, Deckboat). If you use a constraint instead, then any user of the database who has no SQL knowledge, or no access rights to your DB, is at your mercy or becomes too dependent on you whenever the list has to change. If you keep it as a separate table, users of your system can add or change values through your front-end program (e.g. ASP, PHP) even without knowing any SQL. In other words, your design is more flexible and scalable, not to mention less maintenance on your part.
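The lookup-table version, sketched with SQLite through Python's sqlite3 module (types are simplified, and note that SQLite only enforces foreign keys once the pragma is switched on):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite does not enforce FKs by default

# The reference table, populated with the allowed types.
conn.execute("CREATE TABLE BoType (ID INTEGER PRIMARY KEY, Type VARCHAR(255))")
conn.executemany("INSERT INTO BoType (ID, Type) VALUES (?, ?)",
                 [(1, "Sailboat"), (2, "Houseboat"), (3, "Deckboat")])

conn.execute("""
    CREATE TABLE Boat (
        BName VARCHAR(255) PRIMARY KEY,
        BType INTEGER REFERENCES BoType(ID),
        Price DOUBLE,
        OName VARCHAR(255)
    )
""")
conn.execute("INSERT INTO Boat VALUES ('Sea Breeze', 1, 15000.0, 'Alice')")

# A BType with no matching BoType row is rejected:
try:
    conn.execute("INSERT INTO Boat VALUES ('Ghost', 99, 100.0, 'Bob')")
except sqlite3.IntegrityError:
    print("unknown boat type rejected")

# Join back to get the readable type name:
row = conn.execute("""
    SELECT b.BName, t.Type FROM Boat b JOIN BoType t ON b.BType = t.ID
""").fetchone()
print(row)
```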

Related

Automatic uniqueidentifier during table design

I've created a new table, made the id column the primary key, and defined it as uniqueidentifier.
Is there a way, during the table's design in SQL Server Management Studio, to assign a rule so that all new rows auto-generate a new uniqueidentifier in the id column?
At the moment, to make my form (made in Retool) write to the table, I have to type out a random set of characters, essentially creating my own uniqueidentifier by hand, which obviously isn't correct.
Avoid the designers; they've been a complete and utter mess for 17 years. Do this in a query window instead:
USE tempdb;
GO
CREATE TABLE dbo.what
(
id uniqueidentifier NOT NULL
CONSTRAINT DF_what_id DEFAULT(NEWSEQUENTIALID()),
-- or NEWID() if you like page splits
name nvarchar(128),
CONSTRAINT PK_what PRIMARY KEY (id)
);
INSERT dbo.what(name) VALUES(N'hi'),(N'there');
SELECT id, name FROM dbo.what;
Output (yours will have different values for id):

id                                    name
------------------------------------  -----
84c37c76-8c0e-ed11-ba5d-00163ef319ff  hi
85c37c76-8c0e-ed11-ba5d-00163ef319ff  there

PostgreSQL storage need of references

I'm heavily using references in a SQL layout and was wondering if that's a bad habit. If I declare a reference as varchar(20), does PostgreSQL double the storage usage, or does it just use a hidden ID to link the values?
An example:
create table if not exists distros(
name varchar(20),
primary key(name)
);
create table if not exists releases(
distro varchar(20) references distros(name),
name varchar(20),
primary key(distro, name)
);
create table if not exists targets(
distro varchar(20) references distros(name),
release varchar(20) references releases(name),
name varchar(20),
primary key (distro, release, name)
);
Is the distro value stored once or three times?
Thanks
I'm afraid your column distro is stored not once or three times, but many more times than that.
It appears in each of your tables, and on top of that you have made it part of the primary key, which in turn makes it part of every index you define on those tables.
Create your tables this way instead. It will save you a lot of space and will be faster.
create table if not exists distros(
id serial,
name varchar(20),
primary key(id)
);
create table if not exists releases(
id serial,
distro_id int references distros(id),
name varchar(20),
primary key(id)
);
create table if not exists targets(
id serial,
distro_id int references distros(id),
release_id int references releases(id),
name varchar(20),
primary key (id)
);
Your data is repeated. The foreign key constraint (a.k.a. the "references") simply means that you cannot have a value in the column unless it exists in the referenced column.
This tutorial is worth reading.
I do not know the Postgres storage layout in detail, but I believe each record is stored completely in a so-called data page, so that table scans (searches that don't use indexes) require no additional dereferencing; that includes all of the referencing attributes.
Additionally, the value will be stored at least partly in each index, from which the referenced record is found via some kind of record id, depending on the indexing technology in use. Ordinary B(*)-trees work this way.
So the answer is: at least three times, and accumulating further in each index used to search for the referenced records.

Can I use a trigger to create a column?

As an alternative to anti-patterns like Entity-Attribute-Value or Key-Value Pair tables, is it possible to dynamically add columns to a data table via an INSERT trigger on a parameter table?
Here would be my tables:
CREATE TABLE [Parameters]
(
id int NOT NULL
IDENTITY(1,1)
PRIMARY KEY,
Parameter varchar(200) NOT NULL,
Type varchar(200) NOT NULL
)
GO
CREATE TABLE [Data]
(
id int NOT NULL
IDENTITY(1,1)
PRIMARY KEY,
SerialNumber int NOT NULL
)
GO
And the trigger would then be placed on the parameter table, triggered by new parameters being added:
CREATE TRIGGER [TRG_Data_Insert]
ON [Parameters]
FOR INSERT
AS BEGIN
-- The trigger takes the newly inserted parameter
-- record and ADDs a column to the data table, using
-- the parameter name as the column name, the data type
-- as the column data type and makes the new column
-- nullable.
END
GO
This would allow my data mining application to get a list of parameters to mine and have a place to store that data once it mines it. It would also allow a user to add new parameters to mine dynamically, without having to mess with SQL.
Is this possible? And if so, how would you go about doing it?
I think the idea of dynamically adding columns is a ticking time bomb, gradually creeping towards one of the SQL Server limits.
You would also be putting the database design in the hands of your users, leaving you at the mercy of their naming conventions and crazy ideas.
So while it is possible, is it really better than an EAV table, which is at least obvious to the next developer who picks up your program?
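If you do go down this road anyway, the mechanics boil down to building dynamic DDL from the inserted parameter row. The sketch below does this application-side with SQLite through Python's sqlite3 module (SQLite triggers cannot run DDL, so the "trigger" is just a function here, and the type whitelist plus identifier check are invented safety measures against injection through the Parameter/Type values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Parameters (id INTEGER PRIMARY KEY,"
             " Parameter TEXT NOT NULL, Type TEXT NOT NULL)")
conn.execute("CREATE TABLE Data (id INTEGER PRIMARY KEY, SerialNumber INT NOT NULL)")

ALLOWED_TYPES = {"INT", "REAL", "TEXT"}  # whitelist: never interpolate raw user input

def add_parameter(name: str, sql_type: str) -> None:
    """Record the parameter, then grow the Data table to hold its values."""
    if sql_type not in ALLOWED_TYPES or not name.isidentifier():
        raise ValueError("unsafe parameter definition")
    conn.execute("INSERT INTO Parameters (Parameter, Type) VALUES (?, ?)",
                 (name, sql_type))
    # Identifiers cannot be bound as ?-parameters, hence the (validated) f-string.
    conn.execute(f'ALTER TABLE Data ADD COLUMN "{name}" {sql_type}')

add_parameter("Temperature", "REAL")
cols = [r[1] for r in conn.execute("PRAGMA table_info(Data)")]
print(cols)  # the Data table has grown a nullable Temperature column
```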

Constraint to table column name (postgresql)

I'm implementing a product database using the single table inheritance model (potentially class table inheritance later) for product attributes. That's all working well, but I'm trying to figure out how best to deal with product variants whilst maintaining referential integrity.
Right now a simplified version of my main product table looks like this:
CREATE TABLE product (
id SERIAL NOT NULL,
name VARCHAR(100) NOT NULL,
brand VARCHAR(40) NOT NULL,
color VARCHAR(40)[] NOT NULL
)
(color is an array so that all of the standard colors of any given product can be listed)
For handling variants I've considered tracking the properties on which products vary in a table called product_variant_theme:
CREATE TABLE product_variant_theme (
id SERIAL NOT NULL,
product_id INT NOT NULL,
attribute_name VARCHAR(40) NOT NULL
)
Wherein I insert rows with the product_id in question and add the column name for the attribute into the attribute_name field e.g. 'color'.
Now, feel free to tell me if this is an entirely stupid way to go about this in the first place, but I am concerned by the lack of a constraint between attribute_name and the actual column name itself. Obviously, if I alter the product table and remove that column, I might still be left with rows in my second table that refer to it. The functional equivalent of what I'm looking for would be something like a foreign key on attribute_name pointing at the information_schema view that describes the tables, but I don't think there's any way to do that directly, and I'm wondering if there is any reasonable way to get that kind of functionality here.
Thanks.
Are you looking for something like this?
product
=======
id
name
attribute
=========
id
name
product_attribute_map
=====================
product_id
attribute_id
value
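A minimal sketch of that mapping in action, using SQLite through Python's sqlite3 module (table names follow the layout above; the T-shirt/color sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product   (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE attribute (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE product_attribute_map (
        product_id   INT REFERENCES product(id),
        attribute_id INT REFERENCES attribute(id),
        value        TEXT,
        PRIMARY KEY (product_id, attribute_id, value)
    );

    INSERT INTO product   VALUES (1, 'T-shirt');
    INSERT INTO attribute VALUES (1, 'color');
    INSERT INTO product_attribute_map VALUES (1, 1, 'red'), (1, 1, 'blue');
""")

# All variant values of one attribute for one product:
colors = [r[0] for r in conn.execute("""
    SELECT m.value
    FROM product_attribute_map m
    JOIN product p   ON m.product_id   = p.id
    JOIN attribute a ON m.attribute_id = a.id
    WHERE p.name = 'T-shirt' AND a.name = 'color'
    ORDER BY m.value
""")]
print(colors)
```

Because attributes are rows rather than columns, dropping an attribute is a DELETE with ordinary referential integrity, which sidesteps the column-name constraint problem entirely.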

Variable amount of sets as SQL database tables

More of a question concerning the database model for a specific problem. The problem is as follows:
I have a number of objects that make up the rows of a fixed table; they are all distinct (of course). I would like to create sets that contain a variable number of these stored objects. The sets are user-defined, so nothing can be hard-coded, and each set is identified by a number.
My question is: what advice can you experienced SQL programmers give me for implementing such a feature? My most direct approach would be to create a table for each such set using table variables or temporary tables, plus an existing table that contains the names of the sets (so the user can see which sets are currently in the database).
If that's not efficient, what direction should I be looking in to solve this?
Thanks.
Table variables and temporary tables are short-lived, narrow in scope, and probably not what you want to use for this. One table per set is also not a solution I would choose.
By the sound of it you need three tables. One for Objects, one for Sets and one for the relationship between Objects and Sets.
Something like this (using SQL Server syntax to describe the tables).
create table [Object]
(
ObjectID int identity primary key,
Name varchar(50)
-- more columns here necessary for your object.
)
go
create table [Set]
(
SetID int identity primary key,
Name varchar(50)
)
go
create table [SetObject]
(
SetID int references [Set](SetID),
ObjectID int references [Object](ObjectID),
primary key (SetID, ObjectID)
)
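Populated and queried, the three tables behave like this, sketched with SQLite through Python's sqlite3 module (identity columns become INTEGER PRIMARY KEY, and the fruit sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Object (ObjectID INTEGER PRIMARY KEY, Name VARCHAR(50));
    CREATE TABLE [Set]  (SetID    INTEGER PRIMARY KEY, Name VARCHAR(50));
    CREATE TABLE SetObject (
        SetID    INT REFERENCES [Set](SetID),
        ObjectID INT REFERENCES Object(ObjectID),
        PRIMARY KEY (SetID, ObjectID)
    );

    INSERT INTO Object VALUES (1, 'apple'), (2, 'pear'), (3, 'plum');
    INSERT INTO [Set]  VALUES (1, 'fruit I like');
    INSERT INTO SetObject VALUES (1, 1), (1, 3);
""")

# Members of one user-defined set:
members = [r[0] for r in conn.execute("""
    SELECT o.Name
    FROM SetObject so
    JOIN Object o ON so.ObjectID = o.ObjectID
    WHERE so.SetID = 1
    ORDER BY o.Name
""")]
print(members)
```

Adding an object to a set is then a single row in SetObject, and the composite primary key keeps any object from appearing in the same set twice.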