Please guide me if I'm on the right track.
I'm trying to create a database schema for a mobile bill for a person X, and I need to know how to define the PK and FK for the table Bill_Detail_Lines.
Here are the assumptions:
Every customer will have a unique relationship number.
Bill_no will be unique as it is generated every month.
X can call the same mobile number every month.
An Account_no is associated with every mobile number, and it doesn't change.
Schema:
table: Bill_Headers
Relationship_no - int, NOT NULL , PK
Bill_no - int, NOT NULL , PK
Bill_date - varchar(255), NOT NULL
Bill_charges - int, NOT NULL
table: Bill_Detail_Lines
Account_no - int, NOT NULL
Bill_no - int, NOT NULL , FK
Relationship_no - int, NOT NULL, FK
Phone_no - int, NOT NULL
Total_charges - int
table: Customers
Relationship_no - int, NOT NULL, PK
Customer_name - varchar(255)
Address_line_1 - varchar(255)
Address_line_2 - varchar(255)
Address_line_3 - varchar(255)
City - varchar(255)
State - varchar(255)
Country - varchar(255)
I would recommend having a primary key for Bill_Detail_Lines. If each line represents a total of all calls made to a given number, then the natural PK seems to be (Relationship_no, Bill_no, Phone_no), or maybe (Relationship_no, Bill_no, Account_no).
If each line instead represents a single call, then I would probably add a Line_no column and make the PK (Relationship_no, Bill_no, Line_no).
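If each line is a per-number total, a minimal sketch of that key (a sketch only, reusing the column names and types from the posted schema) could look like this:
-- A sketch only: assumes one detail line per phone number per bill,
-- with a composite FK pointing at Bill_Headers' composite PK.
CREATE TABLE Bill_Detail_Lines (
Relationship_no INT NOT NULL,
Bill_no INT NOT NULL,
Phone_no INT NOT NULL,
Account_no INT NOT NULL,
Total_charges INT,
PRIMARY KEY (Relationship_no, Bill_no, Phone_no),
FOREIGN KEY (Relationship_no, Bill_no) REFERENCES Bill_Headers (Relationship_no, Bill_no),
FOREIGN KEY (Relationship_no) REFERENCES Customers (Relationship_no)
);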
Yes, as far as I can see, everything looks good.
I have to disagree; there are a couple of 'standards' which aren't being followed. Yes, the design looks OK, but the naming convention isn't appropriate.
Firstly, table names should be singular (many people will disagree with this).
If you have a single int PK on a table, the standard is to call it 'ID', so you get, for instance, "SELECT Customer.ID FROM Customer". You then also fully qualify the FK columns: for instance, CustomerID on Bill_Headers instead of Relationship_no, which you would otherwise have to look up in the table definition to remember what it relates to.
Something I also always keep in mind is to make the column name as clear and short as possible without obfuscating it. For instance, "Bill_charges" on Bill_Headers could just be "Charges", since you're already on Bill_Header(s) (<- damn that 's'). The same goes for Date, although the date could be a bit more descriptive: CreatedDate, LastUpdatedDate, etc.
Lastly, beware of hard-coding multiple columns where one would suffice, and the same the other way around. Specifically, I'm talking about:
Address_line_1 - varchar(255)
Address_line_2 - varchar(255)
Address_line_3 - varchar(255)
This will lead to headaches later. SQL can store newline characters in a string, so combining them into one "Address - varchar(8000)" would be easiest. Ideally this would be in a separate table, say Customer_Address, with a "CustomerID - int PK FK" column, where you can enter the specific information.
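A minimal sketch of that separate table (a sketch only; it assumes the singular Customer table with an int ID primary key, as suggested above, and moves the address-related columns over):
-- Sketch: one address row per customer; CustomerID is both PK and FK
CREATE TABLE Customer_Address (
CustomerID INT NOT NULL PRIMARY KEY REFERENCES Customer (ID),
Address VARCHAR(8000),
City VARCHAR(100),
State VARCHAR(100),
Country VARCHAR(100)
);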
Remember, these are just suggestions as there's no single way of database design that everyone SHOULD follow. These are best practices, at the end of the day it's your decision to make.
There are a few mistakes:
Relationship_no and Bill_no are int. Make sure the entries stay within the range of an integer; otherwise it is better to make them varchar() or char().
Bill_date should be of data type DATE.
In the table Bill_Detail_Lines too, it is better to have Account_no as varchar() or char(), because account numbers can be long. The same goes for Phone_no.
Your Customers table is fine, except that you have used a varchar() size of 255 for City, State and Country, which is larger than needed; you can work with a smaller size.
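A sketch of the header table with those suggestions applied (the lengths shown are illustrative only, not part of the original design):
-- A sketch only: illustrative lengths, applying the suggestions above
CREATE TABLE Bill_Headers (
Relationship_no VARCHAR(20) NOT NULL,
Bill_no VARCHAR(20) NOT NULL,
Bill_date DATE NOT NULL,
Bill_charges INT NOT NULL,
PRIMARY KEY (Relationship_no, Bill_no)
);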
I'm creating a table and I need a check constraint to validate the possible values of a string column. I'm creating this table:
CREATE TABLE cat_accident (
acc_type VARCHAR(30) NOT NULL CHECK(acc_type = 'Home accident' OR acc_type = 'Work accident'),
acc_descrip VARCHAR(30) NOT NULL
);
So basically I want to validate that if acc_type is equal to 'Home accident', then acc_descrip can be 'Intoxication', 'burns' or 'Kitchen wound', and if acc_type is equal to 'Work accident', then acc_descrip can be 'freezing' or 'electrocution'.
How do I write that constraint?
Use a CHECK constraint with a CASE expression:
CREATE TABLE cat_accident (
acc_type VARCHAR(30) NOT NULL,
acc_descrip VARCHAR(30) NOT NULL
CHECK(
CASE acc_type
WHEN 'Home accident' THEN acc_descrip IN ('Intoxication', 'burns', 'Kitchen wound')
WHEN 'Work accident' THEN acc_descrip IN ('freezing', 'electrocution')
END
)
);
See the demo.
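Note that a CASE branch returning a boolean is accepted in databases where a boolean is an ordinary value (PostgreSQL, for example), but not in every engine. A sketch of an equivalent, more portable constraint that expresses the same rule with plain AND/OR:
-- Equivalent table-level constraint without CASE (same rule, portable syntax)
CREATE TABLE cat_accident (
acc_type VARCHAR(30) NOT NULL,
acc_descrip VARCHAR(30) NOT NULL,
CHECK (
(acc_type = 'Home accident' AND acc_descrip IN ('Intoxication', 'burns', 'Kitchen wound'))
OR (acc_type = 'Work accident' AND acc_descrip IN ('freezing', 'electrocution'))
)
);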
I'd suggest implementing this with a lookup table:
CREATE TABLE l_accident_description(
description_id VARCHAR(5) PRIMARY KEY,
description_full VARCHAR(30) NOT NULL UNIQUE,
location VARCHAR(30)
);
INSERT INTO l_accident_description
(description_id,description_full,location)
VALUES
('INTOX','Intoxication','Home Accident'),
('BURNS','Burns','Home Accident'),
('K_WND','Kitchen wound','Home Accident'),
('FREEZ','Freezing','Work Accident'),
('ELECT','Electrocution','Work Accident');
That way you can encode the relationship you want in cat_accident, but if the details ever change, it's only a matter of inserting/deleting/updating rows in your lookup table. This implementation has the added benefit that you're not storing as much data repetitively in your table (just a VARCHAR(5) code rather than a VARCHAR(30) string). The table construction then becomes (with a primary key added):
CREATE TABLE cat_accident (
cat_accident_id INT PRIMARY KEY,
acc_descrip VARCHAR(5) NOT NULL REFERENCES l_accident_description(description_id)
);
Any time you wanted to know whether an accident was a Home or Work accident, this could be accomplished with a query joining the lookup table. Joining lookup tables is more in the spirit of good database construction than hard-coding checks into tables that may easily change or grow more complex as the database grows.
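For example, a sketch of such a join, using the tables defined above:
-- Sketch: list each recorded accident with its description and Home/Work location
SELECT ca.cat_accident_id, d.description_full, d.location
FROM cat_accident ca
JOIN l_accident_description d ON d.description_id = ca.acc_descrip;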
In fact, the ideal solution might be to create two lookup tables here, with l_accident_description in turn referencing a location lookup, but for simplicity's sake I've shown how it might be accomplished with one.
I have to create this project for one of my assignments, and I have to use Transact-SQL to create the database. I have changed the code below many times to avoid this error, but it still keeps coming and I can't think of any reason for it.
CREATE TABLE Job
(
Job_ID INT NOT NULL IDENTITY (500, 1),
Pickup_Address VARCHAR(255),
Destination_Address VARCHAR(255),
Crew_Name VARCHAR(255),
Customer_Name VARCHAR(255),
PRIMARY KEY(Job_ID)
)
GO
INSERT INTO Job(Pickup_Address,Destination_Address,Crew_Name,Customer_Name)
VALUES ('Ceylinco Centre Building, 3rd Floor, Nawam Mawatha','NO.50, Sumanagala Road,Rathmalana','Maharagama Crew 1','Dilan'),
('3/82, ST.JUDE LANE,Dalugama','19A, 4th Lane,Koswatthe Rd','Maharagama Crew 2','Kasun'),
('19 Saunders Place,Colombo 12','54/3, Elapitiwala,Ragama','Kottawa Crew 1','Kelly'),
('381 Prince of Wales Avenue,Colombo 14','51, SEA STREET,Colombo','Kottawa Crew 2','Kasun'),
('250 Galle Road,Colombo 03','34, Pepiliyana Road,Nugegoda','Nugegoda Crew 2','Alan')
GO
CREATE TABLE Loads
(
Load_ID INT NOT NULL IDENTITY (1, 1),
Job_ID INT NOT NULL FOREIGN KEY REFERENCES Job(Job_ID),
Load_Type VARCHAR(255),
Product_Name VARCHAR(255),
Loaded_Time DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (Load_ID, Job_ID)
)
GO
I have executed the above SQL code. Previously I couldn't create the Loads table and got an error:
Invalid referencing table
After I tried to avoid this, the code can now be executed, although there is still a warning, shown below. What I need to know is the reason for this, and whether it will be a problem if I keep it as it is. Thank you.
Foreign key 'FK_Loads_c6b32eef601ef719ed3' References invalid table 'job'
The code works; you may have a case-sensitive collation. To find out the collation, you can run this code:
SELECT SERVERPROPERTY('collation');
If this is the case, you can change it (SQL Server 2008 or higher, except Azure) with code like the following (substituting the desired collation):
ALTER DATABASE YOUR_DATABASE
COLLATE Modern_Spanish_CS_AS;
More information here.
I imported 11 million location names from geonames.org into my PostgreSQL database. However, when I try to just view the data, for instance in TablePlus, it is extremely slow; executing a simple select for one row takes about 2 minutes. What can I do with data this large so that it won't be too slow and I can select rows quickly?
I think I don't have any indexes; would that make a difference?
This is my table:
create table geoname (
geonameid int,
name varchar(200),
asciiname varchar(200),
alternatenames text,
latitude float,
longitude float,
fclass char(1),
fcode varchar(10),
country varchar(2),
cc2 varchar(120),
admin1 varchar(20),
admin2 varchar(80),
admin3 varchar(20),
admin4 varchar(20),
population bigint,
elevation int,
gtopo30 int,
timezone varchar(40),
moddate date
);
You need to specify what the query looks like.
Indexes would definitely make a difference. But the type of index depends on the query you are using and the columns used for selecting one or more rows.
The place to start is by defining a primary key on the table. Presumably, geonameid is the primary key. You can do this:
alter table geoname add constraint pk_geoname_geonameid primary key (geonameid);
You should really do this when you create the table, but better late than never.
If you are searching by geonameid, then you will notice a significant speed-up.
If you want to search by other columns, such as name or asciiname, then add indexes for those:
create index idx_geoname_name on geoname(name);
create index idx_geoname_asciiname on geoname(asciiname);
This doesn't work for all searches. If your criterion is a LIKE with wildcards, you may need a different indexing strategy. Similarly, if you search by latitude and longitude, you'll want a spatial (GiST) index.
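For example, if the wildcard searches are prefix matches on name (a sketch only; whether this helps depends on your actual queries and locale), a pattern-ops index supports them:
-- Sketch: supports prefix searches such as  WHERE name LIKE 'Berlin%'
create index idx_geoname_name_prefix on geoname (name varchar_pattern_ops);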
I am designing a relational database that includes currency exchange rates. My first intuition was to have a currency table that lists all the currencies and holding as attributes the exchange rate for every other currency.
Ex:
Currency Table
id INT
base_currency_code VARCHAR
date DATE
USD DECIMAL
GBP DECIMAL
EUR DECIMAL
I've done a search online on how currency exchange tables are implemented in databases, and everywhere I've looked instead has a table currency_exchange with columns base_currency, non_base_currency (to currency), rate, and date.
Since I can't find an implementation similar to mine and since one post I read said the other implementation is how it's done at the financial company he works at, I assume that the other is superior to mine!
I just don't understand why. Could someone please explain why the other is better?
You want tables something like this:
create table Currencies (
CurrencyId int primary key,
ISO_Code varchar(3),
Name varchar(255)
);
create table CurrencyExchangerate (
CurrencyExchangerateId int primary key,
FromCurrencyId int not null references Currencies(CurrencyId),
ToCurrencyId int not null references Currencies(CurrencyId),
RateDate date not null,
ExchangeRate decimal(20, 10),
constraint unq_currencyexchangerate_3 unique (FromCurrencyId, ToCurrencyId, RateDate)
);
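One reason this row-per-pair layout is preferred over a column per currency is that adding a new currency only means inserting rows, never altering the table, and looking up a rate is then a simple query. A sketch of such a lookup, using the tables defined above (the date is illustrative):
-- Sketch: look up the USD -> GBP rate for a given date
SELECT er.ExchangeRate
FROM CurrencyExchangerate er
JOIN Currencies f ON f.CurrencyId = er.FromCurrencyId
JOIN Currencies t ON t.CurrencyId = er.ToCurrencyId
WHERE f.ISO_Code = 'USD'
AND t.ISO_Code = 'GBP'
AND er.RateDate = '2024-01-02';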
I have to create a table in SQL where one of the columns stores awards for a movie. The schema says it should store something like "Oscar, screenplay". Is it possible to store two values in the same field in SQL? If so, what data type would that be, and how would you query the table for it?
It's a horrible design pattern to store more than one piece of data in a single column in a relational database. The exact design of your system depends on several things, but here is one possible way to model it:
CREATE TABLE Movie_Awards (
movie_id INT NOT NULL,
award_id INT NOT NULL,
CONSTRAINT PK_Movie_Awards PRIMARY KEY CLUSTERED (movie_id, award_id)
)
CREATE TABLE Movies (
movie_id INT NOT NULL,
title VARCHAR(50) NOT NULL,
year_released SMALLINT NULL,
...
CONSTRAINT PK_Movies PRIMARY KEY CLUSTERED (movie_id)
)
CREATE TABLE Awards (
award_id INT NOT NULL,
ceremony_id INT NOT NULL,
name VARCHAR(50) NOT NULL, -- Ex: Best Picture
CONSTRAINT PK_Awards PRIMARY KEY CLUSTERED (award_id)
)
CREATE TABLE Ceremonies (
ceremony_id INT NOT NULL,
name VARCHAR(50) NOT NULL, -- Ex: "Academy Awards"
nickname VARCHAR(50) NULL, -- Ex: "Oscars"
CONSTRAINT PK_Ceremonies PRIMARY KEY CLUSTERED (ceremony_id)
)
I didn't include Foreign Key constraints here, but hopefully they should be pretty obvious.
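To answer the "how would you query it" part, a sketch of a query against this model (table and column names as defined above) that lists each movie with its awards might be:
-- Sketch: returns one row per (movie, award) pair, e.g. title | ceremony | award name
SELECT m.title, c.name AS ceremony, a.name AS award
FROM Movies m
JOIN Movie_Awards ma ON ma.movie_id = m.movie_id
JOIN Awards a ON a.award_id = ma.award_id
JOIN Ceremonies c ON c.ceremony_id = a.ceremony_id;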
Anything's possible; that doesn't mean it's a good idea :)
Far better to normalize your structure and store types like so:
AwardTypes:
AwardTypeID
AwardTypeName
Movies:
MovieID
MovieName
MovieAwardType:
MovieID
AwardTypeID
You can serialize your data into JSON format, store the JSON string, and deserialize it on read. That is safer than using your own format.
Data presentation doesn't have to be so closely tied to the physical data organisation. Wouldn't it be better to store these two pieces of data in two separate columns and then just do some kind of concatenation at display time?
It is much less painful to join data than to split it, if you happen to need just a screenplay one day...
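A sketch of that approach (the table name movie_award and the columns award_ceremony and award_category are illustrative, not from the original schema):
-- Sketch: keep the two parts in separate columns, concatenate only for display
SELECT movie_id, CONCAT(award_ceremony, ', ', award_category) AS award_display
FROM movie_award;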