I have this schema
RESTAURANT
(id int not null,
name varchar(50),
place varchar(100),
distance float,
a varchar(50),
b varchar(50),
c varchar(50),
d varchar(50),
PRIMARY KEY (id))
and I'm tuning a search function for this table.
a, b, c, d are different fields used in the search, but what I need to focus on are place and distance, because most of the queries are actually performed on the combination of these two fields.
I'm using DB2, and I'm not really skilled; any suggestions on where to start?
What you need is to use indexes. You can run the Design Advisor to see what DB2 proposes for indexes:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.perf.doc/doc/c0005144.html
For more information about indexes, you can take a look at:
http://www.ibm.com/developerworks/data/library/dmmag/DMMag_2010_Issue4/DataArchitect/index.html
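Since most of the queries filter on place and distance together, a composite index on those two columns is the usual place to start. A minimal sketch (the index name and the myschema qualifier are just examples):
-- Composite index for queries that filter on place and distance together.
-- Equality-matched column (place) first, range-matched column (distance) second.
CREATE INDEX idx_restaurant_place_dist ON RESTAURANT (place, distance);
-- Refresh statistics so the optimizer considers the new index.
RUNSTATS ON TABLE myschema.RESTAURANT AND INDEXES ALL;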
I imported 11 million location names from geonames.org into my PostgreSQL database. However, when I try to just view the data, for instance in TablePlus, it is extremely slow: executing a simple select for one row takes about 2 minutes. What can I do with large data so that it won't be too slow and I can select rows quickly?
I don't think I have any indexes; would that make a difference?
This is my table:
create table geoname (
geonameid int,
name varchar(200),
asciiname varchar(200),
alternatenames text,
latitude float,
longitude float,
fclass char(1),
fcode varchar(10),
country varchar(2),
cc2 varchar(120),
admin1 varchar(20),
admin2 varchar(80),
admin3 varchar(20),
admin4 varchar(20),
population bigint,
elevation int,
gtopo30 int,
timezone varchar(40),
moddate date
);
You need to specify what the query looks like.
Indexes would definitely make a difference. But the type of index depends on the query you are using and the columns used for selecting one or more rows.
The place to start is by defining a primary key on the table. Presumably, geonameid is the primary key. You can do this:
alter table geoname add constraint pk_geoname_geonameid primary key (geonameid);
You should really do this when you create the table, but better late than never.
If you are searching by geonameid, then you will notice a significant speed-up.
If you want to search by other columns, such as name or asciiname, then add indexes for those:
create index idx_geoname_name on geoname(name);
create index idx_geoname_asciiname on geoname(asciiname);
This doesn't work for all searches. If your criteria use LIKE with wildcards, you may need a different indexing strategy. Similarly, if you search by latitude and longitude, you'll want a GIS index.
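A couple of hedged examples of such indexes (names are illustrative): a plain composite b-tree helps bounding-box filters on latitude/longitude, and the text_pattern_ops operator class helps prefix LIKE searches; leading-wildcard searches ('%berg') or true distance queries would need something else again (e.g. pg_trgm or PostGIS).
-- Helps queries that filter on a latitude/longitude bounding box.
create index idx_geoname_lat_lon on geoname (latitude, longitude);
-- Helps prefix searches such as: where name like 'Berl%'
create index idx_geoname_name_prefix on geoname (name text_pattern_ops);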
I am new to PHP and SQL, and I am building a little game to learn a bit more of the latter.
This is my simple database of three tables:
-- *********** SIMPLE MONSTERS DATABASE
CREATE TABLE monsters (
monster_id VARCHAR(20),
haunt_spawn_point VARCHAR(5) NOT NULL,
monster_name VARCHAR(30) NOT NULL,
level_str VARCHAR(10) NOT NULL,
creation_date DATE NOT NULL,
CONSTRAINT monster_id_pk PRIMARY KEY (monster_id)
);
-- ****************************************
CREATE TABLE spawntypes (
spawn_point VARCHAR(5),
special_tresures VARCHAR (5) NOT NULL,
maximum_monsters NUMBER NOT NULL,
unitary_experience NUMBER NOT NULL,
CONSTRAINT spawn_point_pk PRIMARY KEY (spawn_point)
);
-- ****************************************
CREATE TABLE fights (
fight_id NUMBER,
my_monster_id VARCHAR(20),
foe_spawn_point VARCHAR(5),
foe_monster_id VARCHAR(20) NOT NULL,
fight_start TIMESTAMP NOT NULL,
fight_end TIMESTAMP NOT NULL,
total_experience NUMBER NOT NULL,
loot_type NUMBER NOT NULL,
CONSTRAINT my_monster_id_fk FOREIGN KEY (my_monster_id)
REFERENCES monsters (monster_id),
CONSTRAINT foe_spawn_point_fk FOREIGN KEY (foe_spawn_point)
REFERENCES spawntypes (spawn_point),
CONSTRAINT fight_id_pk PRIMARY KEY (fight_id)
);
Given this schema, how can I easily carry out these two tasks?
1) I would like to create a PL/SQL function that takes only a fight_id as a parameter and, using the foe_spawn_point (inside the fights table), returns the unitary_experience related to that spawn point in the spawntypes table. How can I do it? :-/ [f(x)]
In order to calculate the total experience earned from a fight (unitary_experience * fight_length), I have created a function that, given a particular fight, subtracts fight_start from fight_end, so now I know how long the fight lasted. [f(y)]
2) Is it possible to use these two functions (multiplying the results they return) during the database population task?
INSERT INTO fights VALUES(.... , f(x) * f(y), 'loot A');
in order to populate all the total_experience entries inside the fights table?
Thank you for your help.
In SQL, you don't generally talk about building functions to do things. The building blocks of SQL are queries, views, and stored procedures (most SQL dialects do have functions, but that is not the place to start).
So, given a variable $FIGHTID, you would fetch the unitary experience with a simple query that uses a join:
select st.unitary_experience
from fights f join
     spawntypes st
     on st.spawn_point = f.foe_spawn_point
where f.fight_id = $FIGHTID
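If you do want to wrap that lookup in a PL/SQL function, as the question asks, a minimal sketch (assuming Oracle; the function name is just an example) could be:
create or replace function get_unitary_experience(p_fight_id in fights.fight_id%type)
  return spawntypes.unitary_experience%type
is
  v_exp spawntypes.unitary_experience%type;
begin
  -- Look up the spawn point of the fight's foe and return its unitary experience.
  select st.unitary_experience
    into v_exp
    from fights f
    join spawntypes st on st.spawn_point = f.foe_spawn_point
   where f.fight_id = p_fight_id;
  return v_exp;
end;
/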
If you have a series of values to insert, along with a function, I would recommend using the select form of insert:
insert into fights(<list of columns>, total_experience)
    select <list of values>,
           ($FIGHT_END - $FIGHT_START) * (select unitary_experience from spawntypes where spawn_point = '$SPAWN_POINT')
One comment about the tables. It is a good idea for all the ids in the table to be integers that are auto-incremented. In Oracle you do this by creating a sequence (and it is simpler in most other databases).
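A hedged sketch of the sequence approach (the sample values are made up and assume the referenced monster and spawn-point rows already exist):
-- Create the sequence once.
create sequence fights_seq start with 1 increment by 1;
-- Use it whenever a new fight is inserted.
insert into fights (fight_id, my_monster_id, foe_spawn_point, foe_monster_id,
                    fight_start, fight_end, total_experience, loot_type)
values (fights_seq.nextval, 'mon_001', 'sp001', 'mon_002',
        systimestamp, systimestamp, 0, 1);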
I have to create a table in SQL where one of the columns stores awards for a movie. The schema says it should store something like Oscar, screenplay. Is it possible to store two values in the same field in SQL? If so, what datatype would that be, and how would you query the table for it?
It's a horrible design pattern to store more than one piece of data in a single column in a relational database. The exact design of your system depends on several things, but here is one possible way to model it:
CREATE TABLE Movie_Awards (
movie_id INT NOT NULL,
award_id INT NOT NULL,
CONSTRAINT PK_Movie_Awards PRIMARY KEY CLUSTERED (movie_id, award_id)
)
CREATE TABLE Movies (
movie_id INT NOT NULL,
title VARCHAR(50) NOT NULL,
year_released SMALLINT NULL,
...
CONSTRAINT PK_Movies PRIMARY KEY CLUSTERED (movie_id)
)
CREATE TABLE Awards (
award_id INT NOT NULL,
ceremony_id INT NOT NULL,
name VARCHAR(50) NOT NULL, -- Ex: Best Picture
CONSTRAINT PK_Awards PRIMARY KEY CLUSTERED (award_id)
)
CREATE TABLE Ceremonies (
ceremony_id INT NOT NULL,
name VARCHAR(50) NOT NULL, -- Ex: "Academy Awards"
nickname VARCHAR(50) NULL, -- Ex: "Oscars"
CONSTRAINT PK_Ceremonies PRIMARY KEY CLUSTERED (ceremony_id)
)
I didn't include Foreign Key constraints here, but hopefully they should be pretty obvious.
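For reference, a minimal sketch of those constraints (constraint names are illustrative):
ALTER TABLE Movie_Awards ADD CONSTRAINT FK_Movie_Awards_Movies
    FOREIGN KEY (movie_id) REFERENCES Movies (movie_id);
ALTER TABLE Movie_Awards ADD CONSTRAINT FK_Movie_Awards_Awards
    FOREIGN KEY (award_id) REFERENCES Awards (award_id);
ALTER TABLE Awards ADD CONSTRAINT FK_Awards_Ceremonies
    FOREIGN KEY (ceremony_id) REFERENCES Ceremonies (ceremony_id);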
Anything's possible; that doesn't mean it's a good idea :)
Far better to normalize your structure and store types like so:
AwardTypes:
AwardTypeID
AwardTypeName
Movies:
MovieID
MovieName
MovieAwardType:
MovieID
AwardTypeID
You can serialize your data in JSON format, store the JSON string, and deserialize it on read. That is safer than using your own format.
Data presentation doesn't have to be so closely tied to physical data organisation. Wouldn't it be better to store these two pieces of data in two separate columns and then just do some kind of concatenation at display time?
It is much less painful to join data than to split it, if you happen to need just the screenplay one day...
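For example, assuming hypothetical ceremony and category columns holding the two pieces of data, the display-time concatenation could be as simple as this (use CONCAT() instead of || on MySQL or SQL Server):
-- ceremony and category are hypothetical columns holding e.g. 'Oscar' and 'screenplay'
select ceremony || ', ' || category as award
  from movie_awards;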
Please guide me if I'm on the right track.
I'm trying to create a database schema for a mobile bill for a person X, and I'd like to know how to define the PK and FK for the table Bill_Detail_Lines.
Here are the assumptions:
Every customer will have a unique relationship number.
Bill_no will be unique as it is generated every month.
X can call the same mobile no every month.
Account_no is associated with every mobile no and it doesn't change.
Schema:
table: Bill_Headers
Relationship_no - int, NOT NULL , PK
Bill_no - int, NOT NULL , PK
Bill_date - varchar(255), NOT NULL
Bill_charges - int, NOT NULL
table: Bill_Detail_Lines
Account_no - int, NOT NULL
Bill_no - int, NOT NULL , FK
Relationship_no - int, NOT NULL, FK
Phone_no - int, NOT NULL
Total_charges - int
table: Customers
Relationship_no - int, NOT NULL, PK
Customer_name - varchar(255)
Address_line_1 - varchar(255)
Address_line_2 - varchar(255)
Address_line_3 - varchar(255)
City - varchar(255)
State - varchar(255)
Country - varchar(255)
I would recommend having a primary key for Bill_Detail_Lines. If each line represents a total of all calls made to a given number, then the natural PK seems to be (Relationship_no, Bill_no, Phone_no), or maybe (Relationship_no, Bill_no, Account_no).
If each line instead represents a single call, then I would probably add a Line_no column and make the PK (Relationship_no, Bill_no, Line_no).
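A sketch of the first option (the constraint name is just an example):
ALTER TABLE Bill_Detail_Lines
    ADD CONSTRAINT PK_Bill_Detail_Lines
    PRIMARY KEY (Relationship_no, Bill_no, Phone_no);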
Yes, as far as I can tell, everything looks good.
I have to disagree; there are a couple of 'standards' which aren't being followed. Yes, the design looks OK, but the naming convention isn't appropriate.
Firstly, table names should be singular (many people will disagree with this).
If you have a single int PK on a table, the standard is to call it 'ID', so you have "SELECT Customer.ID FROM Customer", for instance. You then also fully qualify the FK columns, for instance CustomerID on Bill_Headers instead of Relationship_no, which you currently have to check in the table definition to remember what it's related to.
Something I also always keep in mind is to make the column name as clear and short as possible without obfuscating its meaning. For instance, "Bill_charges" on Bill_Headers could just be "Charges", as you're already on Bill_Header(s) (<- damn that 's'). The same goes for Date, though the date could be a bit more descriptive: CreatedDate, LastUpdatedDate, etc.
Lastly, beware of hard-coding multiple columns where one would suffice, and the same the other way around. Specifically I'm talking about:
Address_line_1 - varchar(255)
Address_line_2 - varchar(255)
Address_line_3 - varchar(255)
This will lead to headaches later. SQL does have the capability to store newline characters in a string, so combining them into one "Address - varchar(8000)" column would be easiest. Ideally this would live in a separate table, call it Customer_Address, with a "CustomerID - int PK FK" column, where you can enter specific information.
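A hedged sketch of that separate address table, keeping the existing Relationship_no as the key it references (names and sizes are just examples):
CREATE TABLE Customer_Address (
    CustomerID INT NOT NULL,          -- same value as Customers.Relationship_no
    Address    VARCHAR(500) NOT NULL, -- free-form, may contain newlines
    CONSTRAINT PK_Customer_Address PRIMARY KEY (CustomerID),
    CONSTRAINT FK_Customer_Address_Customers FOREIGN KEY (CustomerID)
        REFERENCES Customers (Relationship_no)
);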
Remember, these are just suggestions, as there's no single way of database design that everyone SHOULD follow. These are best practices; at the end of the day it's your decision to make.
There are a few mistakes:
Relationship_no and Bill_no are int. Make sure that the entries are within the range of an integer; it may be better to store them as varchar() or char().
Bill_date should be of data type DATE.
In the table Bill_Detail_Lines as well, it is better to have Account_no as varchar() or char() because account numbers can be long. The same goes for Phone_no.
Your Customers table is fine, except that you have used a varchar() size of 255 for City, State, and Country, which is too large. You can work with a smaller size.
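A hedged sketch of those suggested changes, in the same notation as above (sizes are just examples):
table: Bill_Headers
Bill_no - varchar(20), NOT NULL, PK
Bill_date - date, NOT NULL
table: Bill_Detail_Lines
Account_no - varchar(20), NOT NULL
Phone_no - varchar(15), NOT NULL
table: Customers
City - varchar(50)
State - varchar(50)
Country - varchar(50)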
I have been using NetBeans as a tool for my Java work, and I have a problem. I read this tutorial and then I tried to create a table using this SQL:
CREATE TABLE CUSTOMERS (
ID INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,
FIRST_NAME VARCHAR(20),
LAST_NAME VARCHAR(30),
ADDRESS VARCHAR(30),
CITY VARCHAR(30),
STATE_ VARCHAR(30),
ZIP VARCHAR(15),
COUNTRY_ID INTEGER,
PHONE VARCHAR(15),
EMAIL_ADDRESS VARCHAR(50)
)ENGINE=INNODB;
When I tried to run it, I got this error message:
sql state 42X01 : Syntax error :
encountered "AUTO_INCREMENT" at line 2
column 29
and when I delete the AUTO_INCREMENT, I get another error:
detected ENGINE=INNODB;
Can someone help me? Thanks.
You seem to be using MySQL syntax with another database engine; the SQL state 42X01 suggests Apache Derby (Java DB), which NetBeans uses by default. The parts it complained about, AUTO_INCREMENT and ENGINE=INNODB, are precisely the MySQL-specific ones.
My suggestion would be the following:
CREATE TABLE CUSTOMERS
( ID INTEGER NOT NULL auto_increment,
FIRST_NAME VARCHAR(20),
LAST_NAME VARCHAR(30),
ADDRESS VARCHAR(30),
CITY VARCHAR(30),
STATE_ VARCHAR(30),
ZIP VARCHAR(15),
COUNTRY_ID INTEGER,
PHONE VARCHAR(15),
EMAIL_ADDRESS VARCHAR(50),
PRIMARY KEY (ID));
Dunno what the engine=innodb is for, have you tried without it?
The "engine=innodb" part specifies the database engine that gets used in the database. With MySQL you can specify different engines like "InnoDB", "MyISAM", etc. They have different properties and features - some allow foreign indexes, some do not. Some have different locking mechanisms, some have different atomicity/rollback properties. I don't know the details but if you need a really high-performance database setup you should investigate which engine is best for each type of table you're creating. Also, all my database experience has been with MySQL and I'm not sure if that's what you're using.
It's been a long time, but if anybody else stumbles on this like I did, a solution that worked for me is, instead of using auto_increment, to declare the ID column as
ID INTEGER GENERATED ALWAYS AS IDENTITY, WHATEVER VARCHAR(20), ETC ETC...
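Putting it together, a complete statement that should work on Derby (Java DB) looks roughly like this (an untested sketch):
CREATE TABLE CUSTOMERS (
    ID INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
    FIRST_NAME VARCHAR(20),
    LAST_NAME VARCHAR(30),
    ADDRESS VARCHAR(30),
    CITY VARCHAR(30),
    STATE_ VARCHAR(30),
    ZIP VARCHAR(15),
    COUNTRY_ID INTEGER,
    PHONE VARCHAR(15),
    EMAIL_ADDRESS VARCHAR(50),
    PRIMARY KEY (ID)
);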