I am using Oracle SQL Developer. I need to create a table:
EXTPROG (ActId, ActName)
ActId Varchar2(4),
ActName Varchar2(10)
are the two columns. ActId should always start with 'A'.
How do I enforce this while creating the table?
It depends on the functionality of the ActId field.
If you want to perform post-insert validation (e.g., you import data from somewhere), jarlh's advice fits perfectly. I would also take a look at the DEFERRABLE option for constraints; it may be helpful in this case.
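For reference, a minimal sketch of that check-constraint approach (the constraint name is mine):
create table EXTPROG (
  ActId   varchar2(4)
          constraint extprog_actid_chk check (ActId like 'A%'),
  ActName varchar2(10)
);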
If ActId is a surrogate primary key (or should be generated somehow) and you want to fill it yourself, I would recommend creating a sequence for this purpose:
insert
into EXTPROG (ActId, ActName)
values ('A' || lpad(to_char(EXTPROG_SEQ.NextVal), 3, '0'), 'SomeName')
In this case, take into account the relatively small size of your ActId field: with VARCHAR2(4) and a leading 'A', only three digits remain for the sequence value.
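The sequence itself is not shown above; a minimal definition, capped so the padded value fits in the remaining three characters, might be:
create sequence EXTPROG_SEQ start with 1 maxvalue 999 nocycle;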
I have a table where each row represents a key-value pair containing application-specific settings (such as the number of days to retain alerts, etc.). Each of these key-value pairs has a different range of valid values, so no single check constraint will apply equally to all rows. Some rows might need no validation at all and others might have string values needing special consideration. Is there some way I can create a check constraint on a per-row basis and have that constraint enforced when that row is updated?
I have attempted several times to achieve this, but have run into hurdles each time. Each attempt relies on the existence of a [Check] column on the table, wherein the constraint is defined for that row, similar to a normal table-based constraint (such as "(CAST(Value AS INTEGER) <= 60)").
My first attempt was to create a normal check constraint that calls a user-defined function that reads the contents of the [Check] column (based on an identity value), performs a test of the constraint, and returns a true/false result, depending on whether the constraint is violated. The problem with this approach is that it requires dynamic SQL, both to read the contents of the [Check] column and to execute the code that it contains. But of course, dynamic SQL is not permitted in a function.
Next, I tried changing the function to a stored procedure, but it does not appear to be possible to call a stored procedure via a check constraint.
Finally, I tried creating a function AND a stored procedure, and calling the stored procedure from the function, but that is not permitted either.
The only way I know that will work is to write a huge, monolithic check constraint, containing checks for each row by identity value, all OR'ed together, like this:
(ID = 1 AND CAST(Value AS INTEGER) <= 100) OR (ID = 2 AND Value IN ('yes', 'no')) OR...
But that's an error-prone maintenance nightmare. Does anyone know of a way to accomplish what I want, without resorting to a monolithic check constraint?
As requested, consider the following table definition and some sample rows:
CREATE TABLE [dbo].[GenericSetting]
(
[ID] [INT] IDENTITY(1,1) NOT NULL,
[Name] [NVARCHAR](50) NOT NULL,
[Value] [NVARCHAR](MAX) NULL,
[Check] [NVARCHAR](MAX) NULL,
CONSTRAINT [PK_GenericSetting] PRIMARY KEY CLUSTERED ([ID])
)
INSERT INTO [dbo].[GenericSetting] ([Name],[Value],[Check]) VALUES ('AlertRetentionDays', 60, 'CAST(Value AS INTEGER) <= 60');
INSERT INTO [dbo].[GenericSetting] ([Name],[Value],[Check]) VALUES ('ExampleMode', 60, 'CAST(Value AS INTEGER) IN (1,2,5)');
You need to create a trigger on this table to accomplish this task.
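A rough sketch of what that could look like (untested; the trigger name is mine, and evaluating a stored expression via dynamic SQL carries the usual injection and performance caveats):
CREATE TRIGGER trg_GenericSetting_Check
ON dbo.GenericSetting
AFTER INSERT, UPDATE
AS
BEGIN
    DECLARE @id INT, @value NVARCHAR(MAX), @check NVARCHAR(MAX), @sql NVARCHAR(MAX), @ok BIT;
    DECLARE c CURSOR LOCAL FAST_FORWARD FOR
        SELECT ID, Value, [Check] FROM inserted WHERE [Check] IS NOT NULL;
    OPEN c;
    FETCH NEXT FROM c INTO @id, @value, @check;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Evaluate the row's stored expression against its own Value column,
        -- exposing the value under the name "Value" that the expressions expect.
        SET @sql = N'SELECT @ok = CASE WHEN ' + @check
                 + N' THEN 1 ELSE 0 END FROM (SELECT @v AS Value) AS t';
        EXEC sp_executesql @sql, N'@v NVARCHAR(MAX), @ok BIT OUTPUT',
             @v = @value, @ok = @ok OUTPUT;
        IF @ok = 0
        BEGIN
            CLOSE c; DEALLOCATE c;
            RAISERROR('Check failed for GenericSetting ID %d', 16, 1, @id);
            ROLLBACK TRANSACTION;
            RETURN;
        END;
        FETCH NEXT FROM c INTO @id, @value, @check;
    END;
    CLOSE c;
    DEALLOCATE c;
END;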
You would write such check constraints using conditional logic. For safety, this is actually a case where I would use case for boolean logic:
alter table eav add constraint chk_eav_value
check ((case when attribute = 'amount'
             then (case when try_convert(int, value) >= 0 then 'ok' else 'bad' end)
             when attribute = 'us_zip'
             then (case when value like '[0-9][0-9][0-9][0-9][0-9]' then 'ok' else 'bad' end)
             when attribute = 'city'
             then (case when value not like '%[^a-zA-Z ]%' then 'ok' else 'bad' end)
             else 'ok'
        end) = 'ok');
Check constraints aren't really designed to do that... the best you could do would be
a validation trigger on the table, which sucks, or
implement all your writes as stored procs, and disable INSERT/UPDATE on the table otherwise. This also sucks.
At the risk of being an SO stereotype: you seem to be putting business logic in the db layer. Check constraints are great for static checks, but they weren't really intended for much beyond that. I would be tempted to suggest looking upstream (the DA layer or a common layer of your codebase) for solutions as well.
Yes, I went off a little there. Sorry in advance.
In theory, you can implement this kind of check via a scalar UDF. However, be aware that they can be quite troublesome in such scenarios.
Considering that you have already chosen the EAV design approach for your system, adding a UDF as a check constraint might degrade overall performance from bad to worse.
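For illustration, the shape of that approach would be something like the following (the function and constraint names are mine; note the body has to stay static, since dynamic SQL is not allowed in a function, which is exactly the limitation the question ran into):
CREATE FUNCTION dbo.fn_CheckGenericSetting (@id INT, @value NVARCHAR(MAX))
RETURNS BIT
AS
BEGIN
    -- Hard-coded, static checks only; no dynamic SQL is possible here.
    RETURN CASE
               WHEN @id = 1 THEN CASE WHEN TRY_CONVERT(INT, @value) <= 60 THEN 1 ELSE 0 END
               WHEN @id = 2 THEN CASE WHEN TRY_CONVERT(INT, @value) IN (1, 2, 5) THEN 1 ELSE 0 END
               ELSE 1
           END;
END;
go
ALTER TABLE dbo.GenericSetting ADD CONSTRAINT chk_GenericSetting
    CHECK (dbo.fn_CheckGenericSetting(ID, Value) = 1);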
I want to add another row to my existing table, and I'm a bit hesitant, because doing it wrong might skew the database. I have my script below and would like to hear your thoughts about it.
I want to add another row for 'Jane' in the table, with 'SKATING' in the ACT column.
Table: [Emp_table].[ACT].[LIST_EMP]
My script is:
INSERT INTO [Emp_table].[ACT].[LIST_EMP]
([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO],[ACT],[NAME])
VALUES
('REG','EMP','45233','2016-06-20 00:00:00:00','2','SKATING','JANE')
Will this do the trick?
Your statement looks ok. If the database has a problem with it (for example, due to a foreign key constraint violation), it will reject the statement.
If any of the fields in your table are numeric (and not varchar or char), just remove the quotes around the corresponding field. For example, if emp_cod and line_no are int, insert the following values instead:
('REG','EMP',45233,'2016-06-20 00:00:00:00',2,'SKATING','JANE')
Inserting records into a database has always been the most common reason I've lost hair from my head!
SQL is great when it comes to SELECTs or even UPDATEs, but when it comes to INSERTs it's like someone from another planet came into the SQL standards committee and managed to get their way of doing it into the final SQL standard!
If your table does not have a primary key that gets generated automatically on every insert, then you have to code the duplicate avoidance yourself.
Start by writing a normal SELECT to see if the record(s) you're going to add don't already exist. But as Robert implied, your table may not have a primary key because it looks like a LOG table to me. So insert away!
If it does need every record to be unique, then I strongly suggest you create a primary key for the table, either an auto-generated one or a combination of your existing columns.
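For example, assuming the first five columns together identify a row (that column choice is a guess from the data shown, and all of them would have to be NOT NULL):
ALTER TABLE [Emp_table].[ACT].[LIST_EMP]
    ADD CONSTRAINT PK_LIST_EMP
    PRIMARY KEY ([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO]);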
Assuming the first five combined columns make a unique key, this select will determine if the data you're inserting does not already exist...
SELECT COUNT(*) AS FoundRec FROM [Emp_table].[ACT].[LIST_EMP]
WHERE [ENTITY] = wsEntity AND [TYPE] = wsType AND [EMP_COD] = wsEmpCod AND [DATE] = wsDate AND [LINE_NO] = wsLineno
You will have to replace the wsXXX references with literal values, or DECLARE them earlier in your script.
If you ran this alone and received a value of 1 or more, then the data already exists in your table, at least in those first 5 columns. A true duplicate test would require you to test EVERY column in your table, but it should give you an idea.
To do it all as one statement, use INSERT ... SELECT rather than INSERT ... VALUES (a VALUES clause cannot take a WHERE):
INSERT INTO [Emp_table].[ACT].[LIST_EMP]
       ([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO],[ACT],[NAME])
SELECT 'REG','EMP','45233','2016-06-20 00:00:00:00','2','SKATING','JANE'
WHERE (SELECT COUNT(*) FROM [Emp_table].[ACT].[LIST_EMP]
       WHERE [ENTITY] = wsEntity AND [TYPE] = wsType AND
             [EMP_COD] = wsEmpCod AND [DATE] = wsDate AND
             [LINE_NO] = wsLineno) = 0
Just replace the wsXXX variables with the values you want to insert.
I hope that made sense.
I know that the question is very long and I understand if someone doesn't have the time to read it all, but I really hope there is a way to do this.
I am writing a program that will read the database schema from the database catalog tables and automatically build a basic application with the information extracted from the system catalogs.
Many tables in the database can be just a list of items of the form
CREATE TABLE tablename (id INTEGER PRIMARY KEY, description VARCHAR NOT NULL);
so when a table has a column that references the id of tablename, I just resolve the description by querying the tablename table, and I display a list of the available options in a combo box.
There are some tables, however, that cannot directly have a description column, because their description would be a combination of other columns. Let's take as an example the most important of those tables in my first application:
CREATE TABLE bankaccount (
bankid INTEGER NOT NULL REFERENCES bank,
officeid INTEGER NOT NULL REFERENCES bankoffice,
crc INTEGER NOT NULL,
number BIGINT NOT NULL
);
This, as many will know, is the full account number for a bank account; in my country it's composed as follows:
[XXXX]    [XXXX]            [XX]   [XXXXXXXXXX]
bank id   bank office id    crc    account number
so that's the reason my bankaccount table is structured the way it is.
Now, I would like to have the complete bank account number in a description column, so I can display it in the application without giving this situation special treatment, since there are some other tables in a similar situation. Something like:
CREATE TABLE bankaccount (
bankid INTEGER NOT NULL REFERENCES bank,
officeid INTEGER NOT NULL REFERENCES bankoffice,
crc INTEGER NOT NULL,
number BIGINT NOT NULL,
description VARCHAR DEFAULT bankid || '-' || officeid || '-' || crc || '-' || number
);
Which of course doesn't work, since the following error is raised¹:
ERROR: cannot use column references in default expression
If there is any different approach that someone can suggest, please feel free to suggest it as an answer.
¹ This is the error message given by PostgreSQL.
What you want is to create a view on your table. I'm more familiar with MySQL and SQLite, so excuse the differences, but basically, if you have a table 'AccountInfo', you can have a view 'AccountInfoView', which is sort of like a 'stored query' that can be used like a table. You would create it with something like:
CREATE VIEW AccountInfoView AS
SELECT *, CONCAT(bankid, officeid, crc, number) AS FullAccountNumber
FROM AccountInfo
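In PostgreSQL terms (which the question uses), the same idea might look like this; the view name and the '-' separators are mine:
CREATE VIEW bankaccount_view AS
SELECT *, bankid || '-' || officeid || '-' || crc || '-' || number AS description
FROM bankaccount;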
Another approach is to have an actual FullAccountNumber column in your original table, and create a trigger that sets it any time an insert or update is performed on your table. This is usually less efficient, as it duplicates storage and takes the performance hit when data are written rather than retrieved. Sometimes that approach can make sense, though.
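If you did go the trigger route in PostgreSQL, a sketch might look like this (function and trigger names are mine; it assumes the table has a plain description column with no DEFAULT):
CREATE FUNCTION set_description() RETURNS trigger AS $$
BEGIN
    -- Recompute the denormalized column on every write.
    NEW.description := NEW.bankid || '-' || NEW.officeid || '-' || NEW.crc || '-' || NEW.number;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER bankaccount_set_description
BEFORE INSERT OR UPDATE ON bankaccount
FOR EACH ROW EXECUTE PROCEDURE set_description();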
What actually works, and I believe is a very elegant solution, is to use a function like this one:
CREATE FUNCTION description(bankaccount) RETURNS VARCHAR AS $$
SELECT
CONCAT(bankid, '-', officeid, '-', crc, '-', number)
FROM
bankaccount this
WHERE
$1.bankid = this.bankid AND
$1.officeid = this.officeid AND
$1.crc = this.crc AND
$1.number = this.number
$$ LANGUAGE SQL STABLE;
which would then be used like this
SELECT bankaccount.description FROM bankaccount;
and hence, my goal is achieved.
Note: this solution works with PostgreSQL only AFAIK.
Q: Is there any way to implement self-documenting enumerations in "standard SQL"?
EXAMPLE:
Column: PlayMode
Legal values: 0=Quiet, 1=League Practice, 2=League Play, 3=Open Play, 4=Cross Play
What I've always done is just define the field as "char(1)" or "int", and define the mnemonic ("league practice") as a comment in the code.
Any BETTER suggestions?
I'd definitely prefer using standard SQL, so the database type (MySQL, MSSQL, Oracle, etc.) shouldn't matter. I'd also prefer using any application language (C, C#, Java, etc.), so the programming language shouldn't matter, either.
Thank you VERY much in advance!
PS:
It's my understanding that using a second table - to map a code to a description, for example "table playmodes (char(1) id, varchar(10) name)" - is very expensive. Is this necessarily correct?
The normal way is to use a static lookup table, sometimes called a "domain table" (because its purpose is to restrict the domain of a column variable.)
It's up to you to keep the underlying values of any enums or the like in sync with the values in the database (you might write a code generator that generates the enum from the domain table and gets invoked whenever something in the domain table changes).
Here's an example:
--
-- the domain table
--
create table dbo.play_mode
(
    id          int         not null primary key clustered ,
    description varchar(32) not null unique nonclustered
)
insert dbo.play_mode values ( 0 , 'Quiet' )
insert dbo.play_mode values ( 1 , 'LeaguePractice' )
insert dbo.play_mode values ( 2 , 'LeaguePlay' )
insert dbo.play_mode values ( 3 , 'OpenPlay' )
insert dbo.play_mode values ( 4 , 'CrossPlay' )
--
-- A table referencing the domain table. The column playmode_id is constrained to
-- one of the values contained in the domain table play_mode.
--
create table dbo.game
(
    id          int not null primary key clustered ,
    team1_id    int not null foreign key references dbo.team( id ) ,
    team2_id    int not null foreign key references dbo.team( id ) ,
    playmode_id int not null foreign key references dbo.play_mode( id )
)
go
Some people for reasons of "economy" might suggest using a single catch-all table for all such code, but in my experience, that ultimately leads to confusion. Best practice is a single small table for each set of discrete values.
Add a foreign key to a "codes" table. The codes table would have the code value as its PK, plus a string description column holding the description of the value.
table:   PlayModes
columns: PlayMode    number  -- primary key
         Description string
I can't see this being very expensive; databases are built around joining tables like this.
That information should be in the database somewhere, not in comments.
So you should have a table containing those codes, and probably a FK from your table to it.
I agree with @Nicholas Carey (+1): a static data table with two columns, say “Key” or “ID” and “Description”, with foreign key constraints on all tables using the codes. Often the ID columns are simple surrogate keys (1, 2, 3, etc., with no significance attached to the value), but when reasonable I go a step further and use “special” codes. Following are a few examples.
If the values are a sequence (say, Ordered, Paid, Processed, Shipped), I might use 1, 2, 3, 4, to indicate sequence. This can make things easier if you want to find everything “up through” a given stage, such as all orders that have not yet been shipped (ID < 4). If you are into planning ahead, make them 10, 20, 30, 40; this will allow you to add values “in between” existing values, if/when new codes or statuses come along. (Yes, you cannot and should not try to anticipate everything and anything that might have to be done some day, but a bit of pre-planning like this can make some changes that much simpler.)
Keys/IDs are often integers (1 byte, 2 byte, 4 byte, whatever). There’s little cost to make them character values (1 char, 2 char, 3 char, 4 char). That’s character, not variable character. Done this way, you can have mnemonics on your codes, such as
O, P, R, S
Or, Pd, Pr, Sh
Ordr, Paid, Proc, Ship
…or whatever floats your boat. Done this way, I have found that it can save a lot of time when analyzing or debugging. You still want the lookup table, for relational integrity as well as a reminder for the more obscure codes.
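A sketch of that variant, reusing the earlier domain-table pattern (table name and codes are illustrative):
create table dbo.order_status
(
    id          char(4)     not null primary key clustered ,
    description varchar(32) not null unique nonclustered
)
insert dbo.order_status values ( 'Ordr' , 'Ordered'   )
insert dbo.order_status values ( 'Paid' , 'Paid'      )
insert dbo.order_status values ( 'Proc' , 'Processed' )
insert dbo.order_status values ( 'Ship' , 'Shipped'   )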
I have a 'users' table with two columns, 'email' and 'new_email'. I need:
A case-insensitive uniqueness constraint covering both columns - i.e., if "Bob@Example.com" appears in one row's 'email' column, then inserting "bob@example.com" into another row's (or even the same row's) 'new_email' column should fail.
Fast case-insensitive searching for a given email address in either the 'email' or 'new_email' fields - i.e. find the row where the new_email OR email is "Bob@example.com", case-insensitive.
I know that I could do this more easily by creating a related 'emails' table, but I'm expecting to be looking up users in this table (by primary key) from several applications, and I'd like to avoid duplicating the join logic in various places to also retrieve their emails. So I think some kind of expression index would be best, if that's possible.
If this isn't possible, I suppose my next best option would be to create a view that the other applications could use to easily fetch a user's emails along with their other information, but I'm not sure how to do that either.
I'm using Postgres 8.4. Thank you!
I think you'll have to use a trigger to enforce your cross-column uniqueness constraint. Add unique indexes on each column, and then add a trigger something like this (untested, off-the-top-of-my-head code):
CREATE FUNCTION no_dups_allowed() RETURNS trigger AS $$
DECLARE
    r RECORD;
BEGIN
    SELECT 1 INTO r
    FROM users
    WHERE LOWER(email) = LOWER(NEW.new_email)
       OR LOWER(new_email) = LOWER(NEW.email);
    IF FOUND THEN
        -- Found a duplicate so it is time for a hissy fit!
        RAISE 'Duplicate email address found' USING ERRCODE = 'unique_violation';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
You'd want something like that as a BEFORE INSERT and BEFORE UPDATE trigger. That trigger would take care of catching cross-column duplicates and the unique indexes would take care of in-column duplicates.
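For instance (the trigger name is mine; syntax per PostgreSQL 8.4):
CREATE TRIGGER users_no_dups
BEFORE INSERT OR UPDATE ON users
FOR EACH ROW EXECUTE PROCEDURE no_dups_allowed();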
Some useful references: FOUND, RAISE, Triggers, Trigger Procedures.
You'll want the individual indexes for your queries anyway, and using the uniqueness half of the indexes simplifies your trigger by leaving it to deal only with the cross-column part; if you try to do it all in the trigger, then you'll have to watch out for updates that don't really change the email or new_email columns.
For the querying half, you could create a view that uses a UNION to combine the two columns. You could also create a function to merge the user's email addresses into one list. It's hard to say which would be best without knowing more details of these other queries, but I suspect that fixing all the other queries to know about email and new_email would be the best approach; you'll have to update all the other queries to use the view or function anyway, so why build a view or function at all?
No need for triggers. Try this:
create table et (email text, email2 text);
create unique index et_u on et (coalesce(lower(email),lower(email2)));
insert into et (email,email2) values ('scott@gmail.com',NULL);
insert into et (email,email2) values ('scott@gmail.com',NULL);
ERROR:  duplicate key value violates unique constraint "et_u"
insert into et (email,email2) values (NULL,'scott@gmail.com');
ERROR:  duplicate key value violates unique constraint "et_u"
insert into et (email,email2) values (NULL,'Scott@gmail.com');
ERROR:  duplicate key value violates unique constraint "et_u"
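For the fast-search half of the question, you could add expression indexes on each column and query both (a sketch under the same demo schema; the index names are mine):
create index et_lower_email  on et (lower(email));
create index et_lower_email2 on et (lower(email2));

select *
from et
where lower(email)  = lower('Bob@example.com')
   or lower(email2) = lower('Bob@example.com');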