What's the best way to self-document "codes" in a SQL-based application? - sql

Q: Is there any way to implement self-documenting enumerations in "standard SQL"?
EXAMPLE:
Column: PlayMode
Legal values: 0=Quiet, 1=League Practice, 2=League Play, 3=Open Play, 4=Cross Play
What I've always done is just define the field as "char(1)" or "int", and define the mnemonic ("league practice") as a comment in the code.
Any BETTER suggestions?
I'd definitely prefer using standard SQL, so the database type (MySQL, MSSQL, Oracle, etc.) shouldn't matter. I'd also prefer using any application language (C, C#, Java, etc.), so the programming language shouldn't matter, either.
Thank you VERY much in advance!
PS:
It's my understanding that using a second table - to map a code to a description, for example "table playmodes (char(1) id, varchar(10) name)" - is very expensive. Is this necessarily correct?

The normal way is to use a static lookup table, sometimes called a "domain table" (because its purpose is to restrict the domain of a column).
It's up to you to keep the underlying values of any enums or the like in sync with the values in the database (you might write a code generator that generates the enum from the domain table, and invoke it whenever something in the domain table changes.)
Here's an example:
--
-- the domain table
--
create table dbo.play_mode
(
id int not null primary key clustered ,
description varchar(32) not null unique nonclustered
)
insert dbo.play_mode values ( 0 , 'Quiet' )
insert dbo.play_mode values ( 1 , 'LeaguePractice' )
insert dbo.play_mode values ( 2 , 'LeaguePlay' )
insert dbo.play_mode values ( 3 , 'OpenPlay' )
insert dbo.play_mode values ( 4 , 'CrossPlay' )
--
-- A table referencing the domain table. The column playmode_id is constrained to
-- one of the values contained in the domain table play_mode.
--
create table dbo.game
(
id int not null primary key clustered ,
team1_id int not null foreign key references dbo.team( id ) ,
team2_id int not null foreign key references dbo.team( id ) ,
playmode_id int not null foreign key references dbo.play_mode( id )
)
go
Some people, for reasons of "economy", might suggest using a single catch-all table for all such codes, but in my experience that ultimately leads to confusion. Best practice is a single small table for each set of discrete values.
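If it helps to see the pattern end to end, here is a minimal runnable sketch (Python with SQLite purely for illustration; the table and column names follow the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces FKs with this pragma

# the domain table
conn.execute("CREATE TABLE play_mode (id INTEGER PRIMARY KEY, "
             "description TEXT NOT NULL UNIQUE)")
conn.executemany("INSERT INTO play_mode VALUES (?, ?)",
                 [(0, "Quiet"), (1, "LeaguePractice"), (2, "LeaguePlay"),
                  (3, "OpenPlay"), (4, "CrossPlay")])

# a referencing table; playmode_id is restricted to the domain table's values
conn.execute("CREATE TABLE game (id INTEGER PRIMARY KEY, "
             "playmode_id INTEGER NOT NULL REFERENCES play_mode(id))")

conn.execute("INSERT INTO game VALUES (1, 2)")       # ok: 2 = LeaguePlay
try:
    conn.execute("INSERT INTO game VALUES (2, 9)")   # 9 is not a legal play mode
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

A join against the domain table turns the stored code back into its self-documenting name.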

Add a foreign key to a "codes" table.
The codes table's primary key is the code value; add a string description column where you enter the description of each value.
table: PlayModes
Columns: PlayMode number --primary key
Description string
I can't see this as being very expensive; databases are built around joining tables like this.

That information should be in the database somewhere, not in comments.
So you should have a table containing those codes, and probably a foreign key from your table to it.

I agree with @Nicholas Carey (+1): a static data table with two columns, say “Key” or “ID” and “Description”, with foreign key constraints on all tables using the codes. Often the ID columns are simple surrogate keys (1, 2, 3, etc., with no significance attached to the value), but when reasonable I go a step further and use “special” codes. Following are a few examples.
If the values are a sequence (say, Ordered, Paid, Processed, Shipped), I might use 1, 2, 3, 4 to indicate sequence. This can make things easier if you want to find everything “up through” a given stage, such as all orders that have not yet been shipped (ID < 4). If you are into planning ahead, make them 10, 20, 30, 40; this will allow you to add values “in between” existing values if/when new codes or statuses come along. (Yes, you cannot and should not try to anticipate everything and anything that might have to be done some day, but a bit of pre-planning like this can make some changes that much simpler.)
Keys/IDs are often integers (1 byte, 2 byte, 4 byte, whatever). There's little cost to making them character values instead (1 char, 2 char, 3 char, 4 char). That's character, not variable character. Done this way, you can have mnemonics on your codes, such as
O, P, R, S
Or, Pd, Pr, Sh
Ordr, Paid, Proc, Ship
…or whatever floats your boat. Done this way, I have found that it can save a lot of time when analyzing or debugging. You still want the lookup table, for relational integrity as well as a reminder for the more obscure codes.
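The ordered-code idea above can be sketched quickly (Python with SQLite; the order_status and orders tables are hypothetical names for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# codes spaced 10 apart so new statuses can slot in between later
conn.execute("CREATE TABLE order_status (id INTEGER PRIMARY KEY, "
             "description TEXT NOT NULL UNIQUE)")
conn.executemany("INSERT INTO order_status VALUES (?, ?)",
                 [(10, "Ordered"), (20, "Paid"), (30, "Processed"), (40, "Shipped")])

conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, "
             "status_id INTEGER REFERENCES order_status(id))")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 40), (2, 20), (3, 10)])

# "everything not yet shipped" falls straight out of the ordering of the codes
rows = conn.execute("SELECT id FROM orders WHERE status_id < 40 ORDER BY id").fetchall()
print([r[0] for r in rows])  # → [2, 3]
```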

Related

Append query error

The primary key of my Specialization table is composite and consists of two columns: Specialization_id and Specialty_id (which is a foreign key to the Specialty table).
I want to write an insert query that creates a new SPECIALIZATION of an existing specialty.
I already have some specializations in my table; the primary keys for the specialty with id = 1 look like (1,1), (2,1), and for id = 2 like (1,2), (2,2).
Now I am going to add a new Specialization to Specialization table, but this Specialization should reference to existing Specialty.
My idea is
INSERT INTO Specialization ( Specialization_name, Specialization_id, Specialty_id )
VALUES( 'Network technologies','(SELECT Max(Specialization_id) FROM Specialization WHERE Specialty_id = '2')+1', '2');
I've tried it with the '' quotes and without them, and I still get errors.
What should I do here?
In a database like MySQL or Oracle you should not programmatically increment the primary key. Instead you should set the field to auto-increment when you design the database.
That would change your INSERT statement to:
INSERT INTO Specialization (Specialization_name, Specialty_id)
VALUES('Network technologies', '2')
which is not only much easier, it is safer.
edited to add: Are you sure Specialty_id is a string? I would think it would be some kind of number.
Your main error is the quotes, since your subquery is quoted it will get processed as a string and not a query.
Also, you don't need a subquery at all; you can use INSERT ... SELECT directly:
INSERT INTO Specialization ( Specialization_name, Specialization_id, Specialty_id )
SELECT 'Network technologies' As a, Max(Specialization_id)+1 As b, 2 As c
FROM Specialization WHERE Specialty_id = 2
However, like dstudeba said, relying on autonumbers for creating new IDs is probably smarter.
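For reference, here is the INSERT ... SELECT approach running end to end (Python with SQLite, purely illustrative; note the race-condition caveat in the comment):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Specialization ("
             "Specialization_name TEXT, Specialization_id INTEGER, Specialty_id INTEGER)")
conn.executemany("INSERT INTO Specialization VALUES (?, ?, ?)",
                 [("Databases", 1, 2), ("Networks", 2, 2)])

# No quotes around the subquery: it is part of the SELECT, not a string literal.
# Caveat: MAX+1 is race-prone under concurrent inserts; an auto-increment key avoids that.
conn.execute("""INSERT INTO Specialization (Specialization_name, Specialization_id, Specialty_id)
                SELECT 'Network technologies', MAX(Specialization_id) + 1, 2
                FROM Specialization WHERE Specialty_id = 2""")

print(conn.execute("SELECT * FROM Specialization "
                   "WHERE Specialization_name = 'Network technologies'").fetchone())
# → ('Network technologies', 3, 2)
```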

Adding Row in existing table (SQL Server 2005)

I want to add another row in my existing table and I'm a bit hesitant if I'm doing the right thing because it might skew the database. I have my script below and would like to hear your thoughts about it.
I want to add another row for 'Jane' in the table, which will be 'SKATING" in the ACT column.
Table: [Emp_table].[ACT].[LIST_EMP]
My script is:
INSERT INTO [Emp_table].[ACT].[LIST_EMP]
([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO],[ACT],[NAME])
VALUES
('REG','EMP','45233','2016-06-20 00:00:00:00','2','SKATING','JANE')
Will this do the trick?
Your statement looks ok. If the database has a problem with it (for example, due to a foreign key constraint violation), it will reject the statement.
If any of the fields in your table are numeric (and not varchar or char), just remove the quotes around the corresponding field. For example, if emp_cod and line_no are int, insert the following values instead:
('REG','EMP',45233,'2016-06-20 00:00:00:00',2,'SKATING','JANE')
Inserting records into a database has always been the most common reason I've lost a lot of the hair on my head!
SQL is great when it comes to SELECTs or even UPDATEs, but when it comes to INSERTs it's like someone from another planet joined the SQL standards committee and managed to get their way of doing it into the final SQL standard!
If your table does not have an automatic primary key that automatically gets generated on every insert, then you have to code it yourself to manage avoiding duplicates.
Start by writing a normal SELECT to see if the record(s) you're going to add don't already exist. But as Robert implied, your table may not have a primary key because it looks like a LOG table to me. So insert away!
If it does require a unique record every time, then I strongly suggest you create a primary key for the table, either an auto-generated one or a combination of your existing columns.
Assuming the first five columns combined make a unique key, this select will determine whether the data you're inserting already exists...
SELECT COUNT(*) AS FoundRec FROM [Emp_table].[ACT].[LIST_EMP]
WHERE [ENTITY] = wsEntity AND [TYPE] = wsType AND [EMP_COD] = wsEmpCod AND [DATE] = wsDate AND [LINE_NO] = wsLineno
You will have to replace the wsXXX placeholders with literal values, or DECLARE them earlier in your script.
If you ran this alone and received a value of 1 or more, then the data already exists in your table, at least in those first 5 columns. A true duplicate test would require you to test EVERY column in your table, but it should give you an idea.
To do it all as one statement, note that INSERT ... VALUES cannot take a WHERE clause; use INSERT ... SELECT instead:
INSERT INTO [Emp_table].[ACT].[LIST_EMP]
([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO],[ACT],[NAME])
SELECT 'REG','EMP','45233','2016-06-20 00:00:00:00','2','SKATING','JANE'
WHERE NOT EXISTS (SELECT 1 FROM [Emp_table].[ACT].[LIST_EMP]
WHERE [ENTITY] = wsEntity AND [TYPE] = wsType AND
[EMP_COD] = wsEmpCod AND [DATE] = wsDate AND
[LINE_NO] = wsLineno)
Just replace the wsXXX variables with the values you want to insert.
I hope that made sense.
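The conditional-insert pattern can be seen working with a trimmed-down version of the table (Python with SQLite, illustrative only; only a few of the question's columns are kept):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LIST_EMP (ENTITY TEXT, EMP_COD TEXT, ACT TEXT, NAME TEXT)")
conn.execute("INSERT INTO LIST_EMP VALUES ('REG', '45233', 'RUNNING', 'JANE')")

def insert_if_absent(entity, emp_cod, act, name):
    # only insert when no row with the same key columns already exists
    conn.execute("""INSERT INTO LIST_EMP (ENTITY, EMP_COD, ACT, NAME)
                    SELECT ?, ?, ?, ?
                    WHERE NOT EXISTS (SELECT 1 FROM LIST_EMP
                                      WHERE ENTITY = ? AND EMP_COD = ? AND ACT = ?)""",
                 (entity, emp_cod, act, name, entity, emp_cod, act))

insert_if_absent('REG', '45233', 'SKATING', 'JANE')  # new ACT -> inserted
insert_if_absent('REG', '45233', 'SKATING', 'JANE')  # duplicate -> skipped
print(conn.execute("SELECT COUNT(*) FROM LIST_EMP").fetchone()[0])  # → 2
```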

I want to show how many of the alumni attended the social event

I want an output to show how many alumni attended the event
I got Event Table
CREATE TABLE EVENT
(EVENTID INTEGER CONSTRAINT EVENT_ID_PK PRIMARY KEY,
EDATEANDTIME VARCHAR2(20),
EVENUE VARCHAR2(30),
EAUDIENCE VARCHAR2(30),
EATTENDED VARCHAR2(30),
EVENT_ALUMNI INTEGER,
CONSTRAINT EVENT_ALUMNI_FK FOREIGN KEY (EVENT_ALUMNI) REFERENCES
ALUMNI(ALUMNIID));
Here is what I inserted in Event
INSERT INTO EVENT VALUES(25993, 'Jan 14, 2015', 'Concorde Hall', 'Tobias, Tucker, Felix, Nicole, Desiree, Taylor, Frant, Ifeoma, Forrest, Stewart, Cole, Arthur, Thomas, Bo, Lucian',
'Tobias, Tucker, Felix, Nicole, Desiree, Taylor, Frant, Ifeoma, Forrest, Stewart, Cole, Arthur, Thomas, Bo',17337);
INSERT INTO EVENT VALUES(23823, 'July 18 2015', 'Rochester Hotel', 'Joan, Thalia, Haleeda', 'Joan, Haleeda'
,19927);
And I have a View Statement to view how many attended
CREATE VIEW VIEWH AS SELECT ETYPE, EDATEANDTIME, COUNT (*) EATTENDED FROM EVENT
WHERE EDATEANDTIME LIKE '%2015%' AND ETYPE = 'Social'
GROUP BY ETYPE, EDATEANDTIME, EATTENDED;
Here is where I have a problem. When I run the query, the output shows only one attendee for the event instead of 10 or 15.
I want to know where I went wrong.
There are several potential problems with your setup, I mentioned a couple in a Comment to your question.
Regarding the count specifically: your first problem is you have EATTENDED in the GROUP BY clause. Why? This almost surely means that each group is exactly one row.
Then your next problem is counting tokens out of a comma-separated string. One way is using regexp_count() as TheGameiswar has shown. (err... his/her solution has disappeared? anyway, I was saying...)
Another is to use length() and translate():
select ... , 1 + length(EATTENDED) - length(translate(EATTENDED, 'x,', 'x')) as ATT_CNT ...
A couple of notes about this:
The length difference, just like regexp_count(), counts commas in the string; you add one to count tokens (you don't care how many commas there are, you care how many names are separated by them). The dummy 'x' character in translate() is needed because in Oracle an empty replacement string is NULL, which would make the whole expression NULL; keeping one character and deleting the comma avoids that. translate() works faster than regexp_count(): regexp functions are very powerful, but slower, so they should be used only when they are really needed, IF performance is important. length() doesn't add much overhead, since in Oracle the length of a VARCHAR2 is always calculated and stored as part of the internal representation of a string.
I gave a different alias to the resulting column; in your original query you use the same name, EATTENDED, for a column in your base table and as an alias for this numeric column in your view. You can do it, it's legal, but it is almost certain to cause problems in the future.
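The comma-counting trick is portable. Here is the same idea sketched in Python with SQLite, which has replace() instead of translate() (illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (eattended TEXT)")
conn.execute("INSERT INTO event VALUES ('Joan,Thalia,Haleeda')")

# remove the commas; the length difference is the number of commas, +1 gives tokens
row = conn.execute("""SELECT 1 + length(eattended) - length(replace(eattended, ',', ''))
                      FROM event""").fetchone()
print(row[0])  # → 3
```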
That is because COUNT gives the number of "rows" in the result, not the number of "values" within a column.
To give you a simple example
example1_table:
col1
--------
a
b
c
SELECT COUNT(col1) FROM example1_table; // The result is 3, as there are three rows
example2_table:
col1
--------
a,b,c
SELECT COUNT(col1) FROM example2_table; // The result is 1, as there is only one row (regardless of the number of values in it)
Note:
As stated by @mathguy, you could have used a better database structure.
Instead of using only one table EVENT, create two tables, say table1 and table2, so you can avoid redundancy.
CREATE TABLE table1
(EVENTID INTEGER CONSTRAINT EVENT_ID_PK PRIMARY KEY,
EDATEANDTIME VARCHAR2(20),
EVENUE VARCHAR2(30),
EVENT_ALUMNI INTEGER,
CONSTRAINT tab1_FK FOREIGN KEY (EVENT_ALUMNI) REFERENCES
ALUMNI(ALUMNIID));
CREATE TABLE table2
(
EAUDIENCE VARCHAR2(30),
EATTENDED VARCHAR2(30),
tab2ID INTEGER,
CONSTRAINT tab2_FK FOREIGN KEY (tab2ID) REFERENCES
table1(EVENT_ALUMNI));
If you don't want to make any changes in your table, assuming there is always a comma present between two attendees names you could try:
CREATE VIEW VIEWH AS
SELECT ETYPE, EDATEANDTIME, regexp_count(EATTENDED, ',') + 1
FROM EVENT
WHERE EDATEANDTIME LIKE '%2015%' AND ETYPE = 'Social'
GROUP BY ETYPE, EDATEANDTIME;

Is it possible to CREATE TABLE with a column that is a combination of other columns in the same table?

I know that the question is very long and I understand if someone doesn't have the time to read it all, but I really wish there is a way to do this.
I am writing a program that will read the database schema from the database catalog tables and automatically build a basic application with the information extracted from the system catalogs.
Many tables in the database can be just a list of items of the form
CREATE TABLE tablename (id INTEGER PRIMARY KEY, description VARCHAR NOT NULL);
so when a table has a column that references the id of tablename I just resolve the descriptions by querying it from the tablename table, and I display a list in a combo box with the available options.
There are some tables, however, that cannot directly have a description column, because their description would be a combination of other columns. Let's take as an example the most important of those tables in my first application:
CREATE TABLE bankaccount (
bankid INTEGER NOT NULL REFERENCES bank,
officeid INTEGER NOT NULL REFERENCES bankoffice,
crc INTEGER NOT NULL,
number BIGINT NOT NULL
);
this as many would know, would be the full account number for a bank account, in my country it's composed as follows
[XXXX][XXXX][XX][XXXXXXXXXX]
  ^      ^    ^       ^
bank id  |   crc  account number
         |
         |_ bank office id
so that's the reason of the way my bankaccount table is structured as is.
Now, I would like to have the complete bank account number in a description column so I can display it in the application without giving a special treatment to this situation, since there are some other tables with similar situation, something like
CREATE TABLE bankaccount (
bankid INTEGER NOT NULL REFERENCES bank,
officeid INTEGER NOT NULL REFERENCES bankoffice,
crc INTEGER NOT NULL,
number BIGINT NOT NULL,
description VARCHAR DEFAULT bankid || '-' || officeid || '-' || crc || '-' || number
);
Which of course doesn't work since the following error is raised1
ERROR: cannot use column references in default expression
If there is any different approach that someone can suggest, please feel free to suggest it as an answer.
1 This is the error message given by PostgreSQL.
What you want is to create a view on your table. I'm more familiar with MySQL and SQLite, so excuse the differences. But basically, if you have table 'AccountInfo' you can have a view 'AccountInfoView' which is sort of like a 'stored query' but can be used like a table. You would create it with something like
CREATE VIEW AccountInfoView AS
SELECT *, CONCAT(bankid, '-', officeid, '-', crc, '-', number) AS FullAccountNumber
FROM AccountInfo
Another approach is to have an actual FullAccountNumber column in your original table, and create a trigger that sets it any time an insert or update is performed on your table. This is usually less efficient though, as it duplicates storage and takes the performance hit when data are written instead of retrieved. Sometimes that approach can make sense, though.
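The view approach runs end to end like this (Python with SQLite, illustrative; SQLite uses || for concatenation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bankaccount ("
             "bankid INTEGER, officeid INTEGER, crc INTEGER, number INTEGER)")
conn.execute("INSERT INTO bankaccount VALUES (1234, 5678, 90, 1234567890)")

# the description is computed by the view at query time, not stored
conn.execute("""CREATE VIEW bankaccount_v AS
                SELECT *, bankid || '-' || officeid || '-' || crc || '-' || number
                          AS description
                FROM bankaccount""")

print(conn.execute("SELECT description FROM bankaccount_v").fetchone()[0])
# → 1234-5678-90-1234567890
```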
What actually works, and what I believe is a very elegant solution, is to use a function like this one
CREATE FUNCTION description(bankaccount) RETURNS VARCHAR AS $$
SELECT
CONCAT(bankid, '-', officeid, '-', crc, '-', number)
FROM
bankaccount this
WHERE
$1.bankid = this.bankid AND
$1.officeid = this.officeid AND
$1.crc = this.crc AND
$1.number = this.number
$$ LANGUAGE SQL STABLE;
which would then be used like this
SELECT bankaccount.description FROM bankaccount;
and hence, my goal is achieved.
Note: this solution works with PostgreSQL only AFAIK.

Optional Unique Constraints?

I am setting up a SaaS application that multiple clients will use to enter data. However, I have certain fields that Client A may want to force unique, Client B may want to allow dupes. Obviously if I am going to allow any one client to have dupes, the table may not have a unique constraint on it. The downside is that If I want to enforce a unique constraint for some clients, I will have to go about it in some other way.
Has anyone tackled a problem like this, and if so, what are the common solutions and or potential pitfalls to look out for?
I am thinking a trigger that checks for any possible unique flags may be the only way to enforce this correctly. If I rely on the business layer, there is no guarantee that the app will do a unique check before every insert.
SOLUTION:
First I considered a unique index, but ruled it out, as a filtered index cannot do any sort of joins or lookups, only expressions over the row's own values. And I didn't want to modify the index every time a client was added or a client's uniqueness preference changed.
Then I looked into CHECK CONSTRAINTS, and after some fooling around, built one function to return true for both hypothetical columns that a client would be able to select as unique or not.
Here are the test tables, data, and function I used to verify that a check constraint could do all that I wanted.
-- Clients Table
CREATE TABLE [dbo].[Clients](
[ID] [int] NOT NULL,
[Name] [varchar](50) NOT NULL,
[UniqueSSN] [bit] NOT NULL,
[UniqueVIN] [bit] NOT NULL
) ON [PRIMARY]
-- Test Client Data
INSERT INTO Clients(ID, Name, UniqueSSN, UniqueVIN) VALUES(1,'A Corp',0,0)
INSERT INTO Clients(ID, Name, UniqueSSN, UniqueVIN) VALUES(2,'B Corp',1,0)
INSERT INTO Clients(ID, Name, UniqueSSN, UniqueVIN) VALUES(3,'C Corp',0,1)
INSERT INTO Clients(ID, Name, UniqueSSN, UniqueVIN) VALUES(4,'D Corp',1,1)
-- Cases Table
CREATE TABLE [dbo].[Cases](
[ID] [int] IDENTITY(1,1) NOT NULL,
[ClientID] [int] NOT NULL,
[ClaimantName] [varchar](50) NOT NULL,
[SSN] [varchar](12) NULL,
[VIN] [varchar](17) NULL
) ON [PRIMARY]
-- Check Uniques Function
CREATE FUNCTION CheckUniques(@ClientID int)
RETURNS int -- 0: Ok to insert, 1: Cannot insert
AS
BEGIN
DECLARE @SSNCheck int
DECLARE @VinCheck int
SELECT @SSNCheck = 0
SELECT @VinCheck = 0
IF (SELECT UniqueSSN FROM Clients WHERE ID = @ClientID) = 1
BEGIN
SELECT @SSNCheck = COUNT(SSN) FROM Cases cs WHERE ClientID = @ClientID AND (SELECT COUNT(SSN) FROM Cases c2 WHERE c2.SSN = cs.SSN) > 1
END
IF (SELECT UniqueVIN FROM Clients WHERE ID = @ClientID) = 1
BEGIN
SELECT @VinCheck = COUNT(VIN) FROM Cases cs WHERE ClientID = @ClientID AND (SELECT COUNT(VIN) FROM Cases c2 WHERE c2.VIN = cs.VIN) > 1
END
RETURN @SSNCheck + @VinCheck
END
-- Add Check Constraint to table
ALTER TABLE Cases
ADD Constraint chkClientUniques CHECK(dbo.CheckUniques(ClientID) = 0)
-- Now confirm constraint using test data
-- Client A: Confirm that both duplicate SSN and VIN's are allowed
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(1, 'Alice', '111-11-1111', 'A-1234')
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(1, 'Bob', '111-11-1111', 'A-1234')
-- Client B: Confirm that Unique SSN is enforced, but duplicate VIN allowed
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(2, 'Charlie', '222-22-2222', 'B-2345') -- Should work
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(2, 'Donna', '222-22-2222', 'B-2345') -- Should fail
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(2, 'Evan', '333-33-3333', 'B-2345') -- Should Work
-- Client C: Confirm that Unique VIN is enforced, but duplicate SSN allowed
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(3, 'Evan', '444-44-4444', 'C-3456') -- Should work
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(3, 'Fred', '444-44-4444', 'C-3456') -- Should fail
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(3, 'Ginny', '444-44-4444', 'C-4567') -- Should work
-- Client D: Confirm that both Unique SSN and VIN are enforced
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(4, 'Henry', '555-55-5555', 'D-1234') -- Should work
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(4, 'Isaac', '666-66-6666', 'D-1234') -- Should fail
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(4, 'James', '555-55-5555', 'D-2345') -- Should fail
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(4, 'Kevin', '555-55-5555', 'D-1234') -- Should fail
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(4, 'Lisa', '777-77-7777', 'D-3456') -- Should work
EDIT:
Had to modify the function a few times to catch NULL values in the dupe check, but all appears to be working now.
One approach is to use a CHECK constraint instead of a unique constraint.
This CHECK constraint will be backed by a SCALAR function that will
take as input ClientID
cross-ref ClientID against a lookup table to see if duplicates are allowed (client.dups)
if not allowed, check for duplicates in the table
Something like
ALTER TABLE TBL ADD CONSTRAINT CK_TBL_UNIQ CHECK(dbo.IsThisOK(clientID)=1)
If you can identify rows in the table for each client, depending on your DBMS you could do something like this:
CREATE UNIQUE INDEX uq_some_col
ON the_table(some_column, other_column, client_id)
WHERE client_id IN (1,2,3);
(The above is valid for PostgreSQL; SQL Server supports filtered indexes like this as of SQL Server 2008.)
The downside is that you will need to re-create that index each time a new client is added that requires those columns to be unique.
You will probably have some checks in the business layer as well, mostly to be able to show proper error messages.
That's perfectly possible now on SQL Server 2008 (tested on my SQL Server 2008 box here):
create table a
(
cx varchar(50)
);
create unique index ux_cx on a(cx) where cx <> 'US';
insert into a values('PHILIPPINES'),('JAPAN'),('US'),('CANADA');
-- still ok to insert duplicate US
insert into a values('US');
-- will fail here
insert into a values('JAPAN');
Related article: http://www.ienablemuch.com/2010/12/postgresql-said-sql-server2008-said-non.html
There are a few things you can do with this; it just depends on when/how you want to handle it.
You could use a check constraint and modify it to do different lookups based on the client that is using it.
You could use the business tier to handle this, but it will not protect you from raw database updates.
I personally have found that #1 can get too hard to maintain, especially if you get a high number of clients. I've found that doing it at the business level is a lot easier, and you can control it at a centralized location.
There are other options, such as a table per client, and others that could work as well, but these two are at least the most common that I've seen.
You could add a helper column. The column would be equal to the primary key for the application that allows duplicates, and a constant value for the other application. Then you create a unique constraint on UniqueHelper, Col1.
For the non-dupe client, it will have a constant in the helper column, forcing the column to be unique.
For the dupe column, the helper column is equal to the primary key, so the unique constraint is satisfied by that column alone. That application can add any number of dupes.
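A sketch of the helper-column idea (Python with SQLite; the table and the per-client constant are hypothetical illustrative choices; here the constant is the client name so different unique-enforcing clients don't collide with each other):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, client TEXT, "
             "ssn TEXT, helper TEXT)")
conn.execute("CREATE UNIQUE INDEX uq ON t(helper, ssn)")

def add_row(rid, client, ssn, dupes_allowed):
    # dupe-allowing client: helper = primary key, so (helper, ssn) is always unique
    # unique-enforcing client: helper = constant, so ssn alone must be unique
    helper = str(rid) if dupes_allowed else client
    conn.execute("INSERT INTO t VALUES (?, ?, ?, ?)", (rid, client, ssn, helper))

add_row(1, "A Corp", "111", dupes_allowed=True)
add_row(2, "A Corp", "111", dupes_allowed=True)       # duplicate allowed
add_row(3, "B Corp", "222", dupes_allowed=False)
try:
    add_row(4, "B Corp", "222", dupes_allowed=False)  # duplicate rejected
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```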
One possibility might be to use BEFORE INSERT and BEFORE UPDATE triggers that can selectively enforce uniqueness.
And another possibility (kind of a kludge) would be to have an additional dummy field that is populated with unique values for one customer and duplicate values for the other customer. Then build a unique index on the combination of the dummy field and the visible field.
@Neil: I had asked in my comment above what your reasons were for putting everything in the same table, and you simply ignored that aspect of my comment and said everything was "plain and simple". Do you really want to hear the downsides of the conditional-constraints approach in a SaaS context?
You don't say how many different sets of rules this table in your SaaS application may eventually need to incorporate. Will there be only two variations?
Performance is a consideration. Although each customer would have access to a dedicated conditional index|indices, fetching data from the base table could become slower and slower as the data from additional customers is added to it and the table grows.
If I were developing a SaaS application, I'd go with dedicated transaction tables wherever appropriate. Customers could share standard tables like zipcodes and counties, and could even share domain-specific tables like Products or Categories or WidgetTypes or whatever. I'd probably build dynamic SQL statements in stored procedures in which the correct table for the current customer is chosen and placed in the statement being constructed, e.g.
sql = "select * from " + DYNAMIC_ORDERS_TABLE + " where ... ")
If performance was taking a hit because the dynamic statements had to be compiled all the time, I might consider writing a dedicated stored procedure generator: sp_ORDERS_SELECT_25_v1.0 (where "25" is the id assigned to a particular user of the SaaS app, with a version suffix).
You're going to have to use some dynamic SQL because the customer id must be appended to the WHERE-clause of every one of your ad hoc queries in order to take advantage of your conditional indexes:
sql = " select * from orders where ... and customerid = " + CURRENT_CUSTOMERID
Your conditional indexes involve your customer/user column and so that column must be made part of every query in order to ensure that only that customer's subset of rows are selected out of the table.
So, when all is said and done, you're really saving yourself the effort required to create a dedicated table and avoiding some dynamic SQL on your bread-and-butter queries. Writing dynamic SQL for bread-and-butter queries doesn't take much effort, and it's certainly less messy than having to manage multiple customer-specific indexes on the same shared table; and if you're writing dynamic SQL you could just as easily substitute the dedicated table name as append the customerid=25 clause to every query. The performance loss of dynamic SQL would be more than offset by the performance gain of dedicated tables.
P.S. Let's say your app has been running for a year or so and you have multiple customers and your table has grown large. You want to add another customer and their new set of customer-specific indexes to the large production table. Can you slipstream these new indexes and constraints during normal business hours or will you have to schedule the creation of these indexes for a time when usage is relatively light?
You don't make clear what benefit there is in having the data from separate universes mingled in the same table.
Uniqueness constraints are part of the entity definition and each entity needs its own table. I'd create separate tables.