Why can I not use a variable to name a new table?
As a beginning SQL project, I'm making a personal finance database. Each account will have a corresponding table in the database. There is also a table listing all the current accounts. See (simplified) code sample below:
CREATE TABLE accountList
(
[Id] INT NOT NULL PRIMARY KEY IDENTITY,
[Name] NCHAR(30) NOT NULL UNIQUE,
[Active] BIT NOT NULL
)
INSERT INTO accountList(name, active)
VALUES
('Bank_One_Checking', 1);
CREATE TABLE Bank_One_Checking
(
[Id] BIGINT NOT NULL PRIMARY KEY IDENTITY,
[payee] NCHAR(30) NOT NULL UNIQUE,
[category] NCHAR(30) NOT NULL UNIQUE,
[amount] INT NOT NULL DEFAULT 0.00
)
This code works. I want to set the account name to a variable (so it can be passed as a parameter to a stored procedure). See code below:
DECLARE @accountName nchar(30);
SET @accountName = 'Bank_One_Savings';
INSERT INTO accountList(name, active)
VALUES
(@accountName, 1);
CREATE TABLE @accountName
(
[Id] BIGINT NOT NULL PRIMARY KEY IDENTITY,
[payee] NCHAR(30) NOT NULL UNIQUE,
[category] NCHAR(30) NOT NULL UNIQUE,
[amount] INT NOT NULL DEFAULT 0.00
)
Line 6 in that code (CREATE TABLE @accountName) produces an error:
Incorrect syntax near @accountName, expecting '.', 'ID', or 'QUOTEID'.
Why won't it insert the variable into the command?
SQL doesn't allow identifiers such as table names to be variables. You could use dynamic SQL if you like, but I strongly recommend against it.
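For completeness, a sketch of the dynamic SQL route in case you decide to ignore that advice (QUOTENAME guards against injection; the column definitions here already incorporate the data-type fixes discussed below):
DECLARE @accountName sysname = N'Bank_One_Savings';
DECLARE @sql nvarchar(max) =
    N'CREATE TABLE ' + QUOTENAME(@accountName) + N'
    (
        [Id] BIGINT NOT NULL PRIMARY KEY IDENTITY,
        [payee] NVARCHAR(30) NOT NULL,
        [category] NVARCHAR(30) NOT NULL,
        [amount] DECIMAL(19, 4) NOT NULL DEFAULT 0
    )';
EXEC sp_executesql @sql;   -- executes the dynamically built CREATE TABLE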
Your code has several flaws. You should learn not only how to fix them but why they are wrong.
You need a "master" table, where AccountName is a column. Multiple tables with the same structure are almost always a sign of poor database design.
Strings should be declared using VARCHAR() or NVARCHAR(), unless they are short or known to always be the same length (say, an account number that is always 15 characters). Fixed-length strings just waste space.
I find it unlikely that a column named category would be unique in such a table. Requiring that seems to violate what uniqueness means.
Integers are not appropriate for monetary amounts in most of the world (use DECIMAL or MONEY). And they shouldn't be initialized to constants with a decimal point.
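Putting those points together, a sketch of the single-transactions-table design (table and column names are illustrative, not prescriptive):
CREATE TABLE accountTransaction
(
    [Id] BIGINT NOT NULL PRIMARY KEY IDENTITY,
    [AccountId] INT NOT NULL REFERENCES accountList(Id),   -- which account the row belongs to
    [Payee] NVARCHAR(30) NOT NULL,
    [Category] NVARCHAR(30) NOT NULL,
    [Amount] DECIMAL(19, 4) NOT NULL DEFAULT 0              -- exact numeric type for money
);
Opening a new account is then just an INSERT into accountList; no DDL and no dynamic SQL required.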
Related
I'm creating a table and I need a check constraint to validate the possible values given a string value. I'm creating this table:
CREATE TABLE cat_accident (
acc_type VARCHAR(30) NOT NULL CHECK(acc_type = 'Home accident' OR acc_type = 'Work accident'),
acc_descrip VARCHAR(30) NOT NULL
);
So basically I want to validate that if acc_type equals 'Home accident', then acc_descrip can be 'Intoxication', 'burns', or 'Kitchen wound'; and if acc_type equals 'Work accident', then acc_descrip can be 'freezing' or 'electrocution'.
How do I write that constraint?
Use a CHECK constraint with a CASE expression:
CREATE TABLE cat_accident (
acc_type VARCHAR(30) NOT NULL,
acc_descrip VARCHAR(30) NOT NULL
CHECK(
CASE acc_type
WHEN 'Home accident' THEN acc_descrip IN ('Intoxication', 'burns', 'Kitchen wound')
WHEN 'Work accident' THEN acc_descrip IN ('freezing', 'electrocution')
END
)
);
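A quick sanity check, assuming the table above (the second insert should be rejected):
INSERT INTO cat_accident (acc_type, acc_descrip) VALUES ('Home accident', 'burns');   -- passes the CHECK
INSERT INTO cat_accident (acc_type, acc_descrip) VALUES ('Work accident', 'burns');   -- violates the CHECK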
I'd suggest implementing this with a lookup table:
CREATE TABLE l_accident_description(
description_id VARCHAR(5) PRIMARY KEY,
description_full VARCHAR(30) NOT NULL UNIQUE,
location VARCHAR(30)
);
INSERT INTO l_accident_description
(description_id,description_full,location)
VALUES
('INTOX','Intoxication','Home Accident'),
('BURNS','Burns','Home Accident'),
('K_WND','Kitchen wound','Home Accident'),
('FREEZ','Freezing','Work Accident'),
('ELECT','Electrocution','Work Accident');
That way you can still capture the relationship you want in cat_accident, but if the details ever change, it's only a matter of inserting/updating/deleting rows in your lookup table. This implementation has the added benefit that you're not storing as much data repetitively in your table (just a VARCHAR(5) code rather than a VARCHAR(30) string). The table definition then becomes (with a primary key added):
CREATE TABLE cat_accident (
cat_accident_id INT IDENTITY(1,1) PRIMARY KEY,
acc_descrip VARCHAR(5) NOT NULL REFERENCES l_accident_description(description_id)
);
Any time you want to know whether an accident was a home or work accident, a query joining the lookup table answers it, as in the sketch below. Joining lookup tables is more in the spirit of good database design than hard-coding checks into tables that may easily change or grow more complex as the database grows.
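A minimal example of such a join, using the names defined above:
SELECT ca.cat_accident_id,
       d.description_full,
       d.location                    -- 'Home Accident' or 'Work Accident'
FROM cat_accident AS ca
JOIN l_accident_description AS d
  ON d.description_id = ca.acc_descrip;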
In fact, the ideal solution might be to create two lookup tables here, with l_accident_description in turn referencing a location lookup, but for simplicity's sake I've shown how it might be accomplished with one.
When creating tables, I have generally created them with a couple extra columns that track change times and the corresponding user:
CREATE TABLE dbo.Object
(
ObjectId int NOT NULL IDENTITY (1, 1),
ObjectName varchar(50) NULL ,
CreateTime datetime NOT NULL,
CreateUserId int NOT NULL,
ModifyTime datetime NULL ,
ModifyUserId int NULL
) ON [PRIMARY]
GO
I have a new project now where, if I continued with this structure, I would have 6 additional columns on each table for this type of change tracking: a time column, a user id column, and a geography column, each for both create and modify. I'm now thinking that adding 6 columns to every table I want to track doesn't make sense. What I'm wondering is if the following structure would make more sense:
CREATE TABLE dbo.Object
(
ObjectId int NOT NULL IDENTITY (1, 1),
ObjectName varchar(50) NULL ,
CreateChangeId int NOT NULL,
ModifyChangeId int NULL
) ON [PRIMARY]
GO
-- foreign key relationships on CreateChangeId & ModifyChangeId
CREATE TABLE dbo.Change
(
ChangeId int NOT NULL IDENTITY (1, 1),
ChangeTime datetime NOT NULL,
ChangeUserId int NOT NULL,
ChangeCoordinates geography NULL
) ON [PRIMARY]
GO
Can anyone offer some insight into this minor database design problem, such as common practices and functional designs?
Where I work, we use the same construct as yours: every table has the following fields:
CreatedBy (int, not null, FK users table - user id)
CreationDate (datetime, not null)
ChangedBy (int, null, FK users table - user id)
ChangeDate (datetime, null)
Pro: easy to track and maintain; only one I/O operation (I'll come back to that later).
Con: I can't think of any at the moment (well, OK, sometimes we don't use the change fields ;-)).
IMO the approach with the extra table has the problem that you somehow have to reference the owning table for every record as well (unless you only need the one direction, Object to Tracking table). The approach also leads to more I/O database operations: for every insert or modification you will need to:
add entry to Table Object
add entry to Tracking Table and get the new Id
update Object Table entry with the Tracking Table Id
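For comparison, a hedged sketch of what each tracked insert then looks like (@UserId and @Name are placeholder parameters, e.g. from a stored procedure; inserting the Change row first satisfies the NOT NULL FK in one pass, collapsing steps 2 and 3):
-- 1) write the tracking row
INSERT INTO dbo.Change (ChangeTime, ChangeUserId)
VALUES (GETDATE(), @UserId);
-- 2) write the object row, wiring up the FK in the same statement
INSERT INTO dbo.Object (ObjectName, CreateChangeId)
VALUES (@Name, SCOPE_IDENTITY());   -- SCOPE_IDENTITY() returns the new ChangeId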
It would certainly make the application code that communicates with the DB a bit more complicated and error-prone.
I have the following SQL that I trigger from a C# app.
All works well, but the id column doesn't auto-increment. It creates the value 1 for the first entry and then won't allow further inserts because it can't create a unique id.
Here is the SQL:
CREATE TABLE of_mapplist_raw (
id integer PRIMARY KEY NOT NULL,
form_name varchar(200) NOT NULL,
form_revi varchar(200) NOT NULL,
source_map varchar(200),
page_num varchar(200) NOT NULL,
fid varchar(200) NOT NULL,
fdesc varchar(200) NOT NULL
)
I'm sure its a schoolboy error at play here.
You need to specify its seed and increment. (Side note: INTEGER is actually accepted as a synonym for INT, so the data type isn't the problem; the missing IDENTITY is.)
id [int] IDENTITY(1,1) NOT NULL,
The first value is the seed; the second one is the delta between increases.
A question you might ask:
Delta between increases? Why do I need that? It's always 1, right?
Well, yes and no. Sometimes you want to leave a gap between rows so you can later insert rows in between, especially if it's a clustered index on that key and speed is important; you can pre-design it to leave gaps.
P.S. I'll be glad to hear other scenarios from watchers.
You need to specify the IDENTITY property:
id int IDENTITY(1,1) NOT NULL
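So the full definition from the question becomes:
CREATE TABLE of_mapplist_raw (
    id int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    form_name varchar(200) NOT NULL,
    form_revi varchar(200) NOT NULL,
    source_map varchar(200),
    page_num varchar(200) NOT NULL,
    fid varchar(200) NOT NULL,
    fdesc varchar(200) NOT NULL
)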
I have a table with the following definition:
CREATE TABLE url_tracker (
id int not null identity(1, 1),
active bit not null,
install_date int not null,
partner_url nvarchar(512) not null,
local_url nvarchar(512) not null,
public_url nvarchar(512) not null,
primary key(id)
);
And I have a requirement that these three URLs always be unique - any individual URL can appear many times, but the combination of the three must be unique (for a given day).
Initially I thought I could do this:
CREATE UNIQUE INDEX uniques ON url_tracker
(install_date, partner_url, local_url, public_url);
However this gives me back the warning:
Warning! The maximum key length is 900 bytes. The index 'uniques' has maximum
length of 3076 bytes. For some combination of large values, the insert/update
operation will fail.
Digging around I learned about the INCLUDE argument to CREATE INDEX, but according to this question converting the command to use INCLUDE will not enforce uniqueness on the URLs.
CREATE UNIQUE INDEX uniques ON url_tracker (install_date)
INCLUDE (partner_url, local_url, public_url);
How can I enforce uniqueness on several relatively large nvarchar fields?
Resolution
So from the comments and answers and more research I'm concluding I can do this:
CREATE TABLE url_tracker (
id int not null identity(1, 1),
active bit not null,
install_date int not null,
partner_url nvarchar(512) not null,
local_url nvarchar(512) not null,
public_url nvarchar(512) not null,
uniquehash AS HashBytes('SHA1',partner_url+local_url+public_url) PERSISTED,
primary key(id)
);
CREATE UNIQUE INDEX uniques ON url_tracker (install_date,uniquehash);
Thoughts?
I would make a computed column with the hash of the URLs, then make a unique index/constraint on that. Consider making the hash a persisted computed column. It shouldn't have to be recalculated after insertion.
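One caveat about the hash in the resolution above: with plain concatenation, ('ab', 'c') and ('a', 'bc') hash identically. A delimiter that should not appear unencoded in a URL avoids that, e.g.:
uniquehash AS HashBytes('SHA1', partner_url + N'|' + local_url + N'|' + public_url) PERSISTED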
Following the ideas from the conversation in the comments: assuming that you can change the datatype of the URL to VARCHAR(900) (or NVARCHAR(450) if you really think you need Unicode URLs) and can live with that limit on URL length, this solution could work. It also assumes SQL Server 2008 or better. Please always specify what version you're working with; sql-server is not specific enough, since solutions can vary greatly depending on the version.
Setup:
USE tempdb;
GO
CREATE TABLE dbo.urls
(
id INT IDENTITY(1,1) PRIMARY KEY,
url VARCHAR(900) NOT NULL UNIQUE
);
CREATE TABLE dbo.url_tracker
(
id INT IDENTITY(1,1) PRIMARY KEY,
active BIT NOT NULL DEFAULT 1,
install_date DATE NOT NULL DEFAULT CURRENT_TIMESTAMP,
partner_url_id INT NOT NULL REFERENCES dbo.urls(id),
local_url_id INT NOT NULL REFERENCES dbo.urls(id),
public_url_id INT NOT NULL REFERENCES dbo.urls(id),
CONSTRAINT unique_urls UNIQUE
(
install_date,partner_url_id, local_url_id, public_url_id
)
);
Insert some URLs:
INSERT dbo.urls(url) VALUES
('http://msn.com/'),
('http://aol.com/'),
('http://yahoo.com/'),
('http://google.com/'),
('http://gmail.com/'),
('http://stackoverflow.com/');
Now let's insert some data:
-- succeeds:
INSERT dbo.url_tracker(partner_url_id, local_url_id, public_url_id)
VALUES (1,2,3), (2,3,4), (3,4,5), (4,5,6);
-- fails:
INSERT dbo.url_tracker(partner_url_id, local_url_id, public_url_id)
VALUES(1,2,3);
GO
/*
Msg 2627, Level 14, State 1, Line 3
Violation of UNIQUE KEY constraint 'unique_urls'. Cannot insert duplicate key
in object 'dbo.url_tracker'. The duplicate key value is (2011-09-15, 1, 2, 3).
The statement has been terminated.
*/
-- succeeds, since it's for a different day:
INSERT dbo.url_tracker(install_date, partner_url_id, local_url_id, public_url_id)
VALUES('2011-09-01',1,2,3);
Cleanup:
DROP TABLE dbo.url_tracker, dbo.urls;
Now, if 900 bytes is not enough, you could change the URL table slightly:
CREATE TABLE dbo.urls
(
id INT IDENTITY(1,1) PRIMARY KEY,
url VARCHAR(2048) NOT NULL,
url_hash AS CONVERT(VARBINARY(32), HASHBYTES('SHA1', url)) PERSISTED,
CONSTRAINT unique_url UNIQUE(url_hash)
);
The rest doesn't have to change. And if you try to insert the same URL twice, you get a similar violation, e.g.
INSERT dbo.urls(url) SELECT 'http://www.google.com/';
GO
INSERT dbo.urls(url) SELECT 'http://www.google.com/';
GO
/*
Msg 2627, Level 14, State 1, Line 1
Violation of UNIQUE KEY constraint 'unique_url'. Cannot insert duplicate key
in object 'dbo.urls'. The duplicate key value is
(0xd111175e022c19f447895ad6b72ff259552d1b38).
The statement has been terminated.
*/
I have some txt files that contain tables with a mix of different records in them, which have different types of values and definitions for columns. I was thinking of importing them into a table and running a query to separate the different record types, since an identifier for this is listed in the first column. Is there a way to change the value type of a column in a query? It will be a pain to treat all of them as text. If you have any other suggestions on how to solve this, please let me know as well.
Here is an example of tables for 2 record types provided by the website where I got the data from
create table dbo.PUBACC_A2
(
Record_Type char(2) null,
unique_system_identifier numeric(9,0) not null,
ULS_File_Number char(14) null,
EBF_Number varchar(30) null,
spectrum_manager_leasing char(1) null,
defacto_transfer_leasing char(1) null,
new_spectrum_leasing char(1) null,
spectrum_subleasing char(1) null,
xfer_control_lessee char(1) null,
revision_spectrum_lease char(1) null,
assignment_spectrum_lease char(1) null,
pfr_status char(1) null
)
go
create table dbo.PUBACC_AC
(
record_type char(2) null,
unique_system_identifier numeric(9,0) not null,
uls_file_number char(14) null,
ebf_number varchar(30) null,
call_sign char(10) null,
aircraft_count int null,
type_of_carrier char(1) null,
portable_indicator char(1) null,
fleet_indicator char(1) null,
n_number char(10) null
)
Yes, you can do what you want. In MS Access you can use any VBA function in a query, with something like:
IIF(FirstColumn="value1", CDate(SecondColumn), NULL) as DateValue,
IIF(FirstColumn="value2", CDec(SecondColumn), NULL) as DecimalValue,
IIF(FirstColumn="value3", CStr(SecondColumn), NULL) as StringValue
You can use all/any of the above in your SELECT.
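For instance, a sketch of a complete query (the table and column names are assumed):
SELECT FirstColumn,
       IIF(FirstColumn = "value1", CDate(SecondColumn), NULL) AS DateValue,
       IIF(FirstColumn = "value2", CDec(SecondColumn), NULL) AS DecimalValue
FROM ImportedData;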
EDIT:
From your comments it seems that you want to split the records into different tables; importing as text should not be a problem in that case.
a)
After you import the data into the initial table, create the proper tables manually; then you can INSERT into the proper table.
b)
You could even do a make-table query, but it might be faster to create the tables manually. If you do a make-table query, you have to be sure that you have cast the data to the proper types in your SELECT.
EDIT2:
As you updated the question to show the structure, it becomes obvious that my suggestion above will not help directly.
If this is a one-time process, you can follow HLGEM's solution. Here are some more details.
1) Import into a table with two columns: RecordType char(2), Rest memo.
2) Now you can split the data (make two queries that select based on RecordType) and re-export it, so you can use Access's import wizard; see the sketch below.
3) Now you have two text files with the proper structure, which can be easily imported.
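A sketch of step 2 as make-table queries (the staging-table and output names are assumptions; 'A2' and 'AC' are the record identifiers from your example):
SELECT Rest INTO A2_export FROM StagingTable WHERE RecordType = "A2";
SELECT Rest INTO AC_export FROM StagingTable WHERE RecordType = "AC";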
I did this in my last job. You start with a staging table that has one column, or two columns if your identifier is always the same length.
Then, using the record identifier, you move the data to another set of staging tables, one for each type of record you have. These have proper columns for the data and can use the correct data types. Then you do any data cleaning you need to do, and then you insert into the real production table.
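For example, a sketch of that final step for the AC records (the staging table dbo.staging_AC and its all-text columns are assumptions):
INSERT INTO dbo.PUBACC_AC (record_type, unique_system_identifier, call_sign, aircraft_count)
SELECT record_type,
       CAST(unique_system_identifier AS numeric(9,0)),   -- text -> numeric
       call_sign,
       CAST(aircraft_count AS int)                       -- text -> int
FROM dbo.staging_AC
WHERE record_type = 'AC';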
If you have a column defined as text, because it has both alphas and numbers, you'll only be able to query it as if it were text. Once you've separated out the different "types" of data into their own tables, you should be able to change the schema definition. Please comment here if I'm misunderstanding what you're trying to do.