The problem is:
We have taken over a website which has an active member community. We've been given the application and a database dump, the site is running successfully on a new server, and the DNS has been switched.
The problem is that the database went out of sync in the time it took to get the files to us and the DNS switched over. Now that the DNS has switched and there is no chance of the database going out of sync again, we've been handed members2, which is the table from the original server containing the extra data.
Both tables look like this
`idmembers` int(10) unsigned NOT NULL auto_increment,
`firstName` varchar(20) NOT NULL,
`lastName` varchar(20) NOT NULL,
`email` varchar(255) NOT NULL,
`date` varchar(10) default '0',
`source` varchar(50) default 'signup',
PRIMARY KEY (`idmembers`),
UNIQUE KEY `email` (`email`)
So the first table is called members1 and is the live database, which is missing a load of members from members2. I need to merge them both together keeping members1 as it is and allowing unique emails from members2 to be inserted into members1.
I am presuming that there is some SQL to do this but I have no idea what it could be.
My second and less preferable approach would be to use a tool like phpMyAdmin to export all the records from members2 after a certain date and reimport them into members1, but the problem is that they all export from members2 with idmembers values that conflict with members1 (since auto_increment is used in both).
If I understand your question correctly, there are two separate issues here:
Adding completely new member records from members2 into members1
Updating the email field in members1, if that got changed in members2
As for the first case, you should be able to do something like:
INSERT INTO members1 (`idmembers`, `firstName`, `lastName`, `email`, `date`, `source`)
SELECT `idmembers`, `firstName`, `lastName`, `email`, `date`, `source`
FROM members2
WHERE idmembers NOT IN (SELECT idmembers FROM members1)
As for the second case, something like:
UPDATE members1 m1 LEFT JOIN members2 m2
ON m1.idmembers = m2.idmembers
SET m1.email = m2.email
WHERE m2.idmembers IS NOT NULL AND m2.email != m1.email
(Note1: Both statements constructed 'ad hoc' and untested!)
(Note2: Both statements assume that the primary key 'idmembers' did not change during the migration of members1! If it did, these queries will not work.)
(Note3: If you encounter the 'different idmember keys' problem from Note2, you can still use the queries, but change the comparison and join operations to use the email field. But then you'd have to execute the second query first to prevent duplicates)
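For illustration, the email-based variant of the first query might look roughly like this (again ad hoc and untested; it deliberately leaves out idmembers so members1 assigns fresh ids):
INSERT INTO members1 (`firstName`, `lastName`, `email`, `date`, `source`)
SELECT `firstName`, `lastName`, `email`, `date`, `source`
FROM members2
WHERE email NOT IN (SELECT email FROM members1)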
The most important suggestion is to do this on a copy of your database, not the live database, until you are certain the process results in the correct merge!
First you should check to see if there are any rows in members2 with duplicate email addresses that already exist in members1:
SELECT members2.*
FROM members1 JOIN members2 USING (email);
If there are any (hopefully there will be few), fix them up manually, or delete each row that is really a duplicate account of a person who already has an account in members1 (keep backup data, of course).
If there are any other cases of redundant member accounts that should be treated as duplicates and not inserted as new members, you may have to handle those manually. This is an example of the broader problem of database cleanup or de-duping, which usually can't be fully automated.
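If the overlap turns out to be plain duplicates that you simply want to drop from members2 before merging, something along these lines could remove them in bulk (an untested sketch; it assumes the members1 copy is the one to keep, so back up members2 first):
DELETE m2
FROM members2 m2
JOIN members1 m1 ON m1.email = m2.email;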
You can copy rows from members2 into members1 while generating new id values like this:
INSERT INTO members1 (`firstName`, `lastName`, `email`, `date`, `source`)
SELECT `firstName`, `lastName`, `email`, `date`, `source`
FROM members2;
Yes, you have to name all the columns. By omitting idmembers from that query, that column will use its default behavior which is to generate a new id value.
You didn't say whether you need to update other tables that reference these new members by their id. If you do, you should create a new table to map each members2 id to the new id generated as you import the row into members1. You'll have to follow @ijclarkson's advice of inserting the members one at a time, so you can note each new id as it is generated.
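A minimal sketch of such a mapping table (the column types are assumptions based on the members schema) could be:
CREATE TABLE members_id_map (
  idmembers1 INT UNSIGNED NOT NULL,
  idmembers2 INT UNSIGNED NOT NULL,
  PRIMARY KEY (idmembers2)
);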
SELECT * FROM members2;
-- loop over results in a script:
INSERT INTO members1 (`firstName`, `lastName`, `email`, `date`, `source`)
VALUES (?, ?, ?, ?, ?);
INSERT INTO members_id_map (idmembers1, idmembers2)
VALUES (LAST_INSERT_ID(), ?); -- use idmembers from the query on members2
-- end loop
Just write a quick porting script which SELECTs the rows in "members2" that are missing from "members1" and then does an INSERT into "members1" for each one.
You might have to do some checking if you require a unique email address, and you think there might be duplicates.
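For what it's worth, a single statement along these lines might replace the script entirely (an untested sketch; it uses the email column to spot the missing rows and lets members1 assign new ids):
INSERT INTO members1 (`firstName`, `lastName`, `email`, `date`, `source`)
SELECT m2.`firstName`, m2.`lastName`, m2.`email`, m2.`date`, m2.`source`
FROM members2 m2
LEFT JOIN members1 m1 ON m1.email = m2.email
WHERE m1.email IS NULL;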
Is there a way in Informix (v12 or higher) to retrieve the name of the current SAVEPOINT?
In Oracle there is something similar: You can name the transaction using SET TRANSACTION NAME and then select the transaction name from v$transaction:
SELECT name
FROM v$transaction
WHERE xidusn
|| '.'
|| xidslot
|| '.'
|| xidsqn = DBMS_TRANSACTION.LOCAL_TRANSACTION_ID;
That is not very straightforward, but it does the trick. Effectively we can use that to have a transaction-scoped variable (yes, that is ugly, but it has worked for years now).
We have a mechanism based on this and would like to port that to Informix. Is there a way to do that?
Of course, if there is a different mechanism providing transaction-scoped variables (so DEFINE GLOBAL is not what we are looking for), that would be helpful too, but I doubt there is one.
Thank you all for your comments so far.
Let me show the solution I have come up with. It is just a work in progress idea, but I hope it will lead somewhere:
I will need an "audit_lock" table which always contains a record for the current transaction, carrying information about it, especially a username and a unique transaction_id (a UUID or similar). That row will be inserted when the transaction starts and deleted before it commits.
Then I have a generic audit_trail table containing the audited information.
All audited tables fill the generic audit trail table using triggers, serializing each audited column into a separate record of the generic audit trail table.
The audit_lock and the audit_trail tables need to use row locking. Also, to avoid read locks on the audit_lock table, we need to set the isolation level to COMMITTED READ LAST COMMITTED. If your use case does not support that, the suggested pattern does not work.
Here's the DDL:
CREATE TABLE audit_lock
(
transaction_id varchar(40) primary key,
username varchar(40)
);
alter table audit_lock
lock mode(ROW);
CREATE TABLE audit_trail
(
id serial primary key,
tablename varchar(255) NOT NULL,
record_id numeric(10) NOT NULL,
username varchar(40) NOT NULL,
transaction_id varchar(40) NOT NULL,
changed_column_name varchar(40),
old_value varchar(40),
new_value varchar(40),
operation varchar(40) NOT NULL,
operation_date datetime year to second NOT NULL
);
alter table audit_trail
lock mode(ROW);
Now we need to have the audited table:
CREATE TABLE audited_table
(
id serial,
somecolumn varchar(40)
);
And the table has an insert trigger writing into the audit_trail:
CREATE PROCEDURE proc_trigger_audit_audited_table ()
REFERENCING OLD AS o NEW AS n FOR audited_table;
INSERT INTO audit_trail
(
tablename,
record_id,
username,
transaction_id,
changed_column_name,
old_value,
new_value,
operation,
operation_date
)
VALUES
(
'audited_table',
n.id,
(SELECT username FROM audit_lock),
(SELECT transaction_id FROM audit_lock),
'somecolumn',
'',
n.somecolumn,
'INSERT',
sysdate
);
END PROCEDURE;
CREATE TRIGGER audit_insert_audited_table INSERT ON audited_table REFERENCING NEW AS post
FOR EACH ROW(EXECUTE PROCEDURE proc_trigger_audit_audited_table() WITH TRIGGER REFERENCES);
Now let's use that: first, the caller of the transaction needs to generate a transaction_id for itself, for example with a UUID generator. In the example below the transaction_id is simply '4711'.
BEGIN WORK;
SET ISOLATION TO COMMITTED READ LAST COMMITTED; --should be set globally
-- Issue the generation of the audit_lock entry at the beginning of each transaction
insert into audit_lock (transaction_id, username) values ('4711', 'userA');
-- Is it there?
select * from audit_lock;
-- do data manipulation stuff
insert into audited_table (somecolumn) values ('valueA');
-- Issue that at the end of each transaction
delete from audit_lock
where transaction_id = '4711';
commit;
In a quick test, all of this worked even with simultaneous transactions. Of course, it still needs a lot of work and testing, but I currently hope this path is feasible.
Let me also add a little bit more info on the other approach we are using in Oracle:
In Oracle we are (ab)using the transaction name to store exactly the information that, in the suggestion above, is stored in the audit_lock table.
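For illustration, that might look roughly like this on the Oracle side (the 'user|id' packing format is just an assumption about how the data could be encoded in the name):
-- first statement of each data-changing transaction in the application:
SET TRANSACTION NAME 'userA|4711';
-- inside a trigger, the name can be read back with the v$transaction query shown
-- earlier and split into the user and the transaction id.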
The rest is the same as above. The triggers work perfectly in that specific application, even though there are of course plenty of scenarios in other applications where putting insert, delete and update triggers on every table, generating a record for each changed column, would be nuts. In our application it has worked perfectly for ten years now and has no noticeable performance impact on the way the application is used.
In the Java application server, all code blocks that change data start by setting the transaction name and then make their changes to the various tables, which may fire all these triggers. All of this runs in the same transaction, and since that transaction has a name containing the application user, the triggers can write that information to the audit trail table.
I know there are other approaches to the problem, and you could even do it with Hibernate features only, but our approach allows us to enforce some consistency through the database (a NOT NULL constraint on the username in the audit trail table). Since everything is done via triggers, we can let them fail if the transaction name does not contain the user (by requiring it to be in a specific format). If other portions of the application, other applications, or ignorant administrators try to issue updates to the audited tables without setting the transaction name in that format, those updates will fail. This makes it harder (certainly not impossible, since an ill-willed admin can do anything) to update the audited tables without generating the required audit entries.
So to all of you who are cringing now, let me quote Luís: Might seem like a terrible idea, but I have my use case ;)
The idea of @Luís of creating a specific table in each transaction to store the information causes a locking issue in systables. Let's call that the "transaction info table". That idea had not crossed my mind, since DDL causes commits in Oracle. So I tried it in Informix, but if I try to create a table called "tblX" in two simultaneous transactions, the second transaction gets a locking exception:
Cannot update system catalog (systables). [SQL State=IX000, DB Errorcode=-312]
Next: ISAM error: key value locked [SQL State=IX000, DB Errorcode=-144]
But letting all transactions use the same table as above works, as far as I tested it right now.
I want to add another row to my existing table and I'm a bit hesitant whether I'm doing the right thing, because it might skew the database. I have my script below and would like to hear your thoughts about it.
I want to add another row for 'Jane' to the table, which will have 'SKATING' in the ACT column.
Table: [Emp_table].[ACT].[LIST_EMP]
My script is:
INSERT INTO [Emp_table].[ACT].[LIST_EMP]
([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO],[ACT],[NAME])
VALUES
('REG','EMP','45233','2016-06-20 00:00:00:00','2','SKATING','JANE')
Will this do the trick?
Your statement looks ok. If the database has a problem with it (for example, due to a foreign key constraint violation), it will reject the statement.
If any of the fields in your table are numeric (and not varchar or char), just remove the quotes around the corresponding field. For example, if emp_cod and line_no are int, insert the following values instead:
('REG','EMP',45233,'2016-06-20 00:00:00:00',2,'SKATING','JANE')
Inserting records into a database has always been the most common reason I've lost a lot of the hair on my head!
SQL is great when it comes to SELECTs or even UPDATEs, but when it comes to INSERTs it's like someone from another planet joined the SQL standards committee and managed to get their way of doing it into the final standard!
If your table does not have a primary key that is automatically generated on every insert, then you have to manage avoiding duplicates yourself.
Start by writing a normal SELECT to check whether the record(s) you're going to add already exist. But as Robert implied, your table may not have a primary key because it looks like a LOG table to me. So insert away!
If it does need a unique record every time, then I strongly suggest you create a primary key for the table, either an auto-generated one or a combination of your existing columns.
Assuming the first five columns combined make a unique key, this select will tell you whether the data you're inserting already exists...
SELECT COUNT(*) AS FoundRec FROM [Emp_table].[ACT].[LIST_EMP]
WHERE [ENTITY] = wsEntity AND [TYPE] = wsType AND [EMP_COD] = wsEmpCod AND [DATE] = wsDate AND [LINE_NO] = wsLineno
You will have to replace the wsXXX placeholders with literal values or DECLARE them earlier in your script.
If you ran this alone and received a value of 1 or more, then the data already exists in your table, at least for those first 5 columns. A true duplicate test would require you to check EVERY column in your table, but it should give you an idea.
For the INSERT, to do it all as one statement you can switch from VALUES to a SELECT (an INSERT ... VALUES cannot take a WHERE clause):
INSERT INTO [Emp_table].[ACT].[LIST_EMP]
([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO],[ACT],[NAME])
SELECT 'REG','EMP','45233','2016-06-20 00:00:00:00','2','SKATING','JANE'
WHERE (SELECT COUNT(*) FROM [Emp_table].[ACT].[LIST_EMP]
       WHERE [ENTITY] = wsEntity AND [TYPE] = wsType AND
             [EMP_COD] = wsEmpCod AND [DATE] = wsDate AND
             [LINE_NO] = wsLineno) = 0
Just replace the wsXXX variables with the values you want to insert.
I hope that made sense.
I have a table, something like:
create table state (foo int not null, bar int not null, baz varchar(32));
create unique index state_idx on state(foo,bar);
I'd like to lock for a unique record in this table. However, if there's no existing record, I'd like to prevent anyone else from inserting one, without inserting a record myself.
I'd use "FOR UPDATE WITH RS USE AND KEEP EXCLUSIVE LOCKS" but that only seems to work if the record exists.
A) You can let DB2 create every ID number. Let's say you have defined your Customer table
CREATE TABLE Customers
( CustomerID Int NOT NULL
GENERATED ALWAYS AS IDENTITY
PRIMARY KEY
, Name Varchar(50)
, Billing_Type Char(1)
, Balance Dec(9,2) NOT NULL DEFAULT
);
Insert rows without specifying the CustomerID, since DB2 will always produce the value for you.
INSERT INTO Customers
(Name, Billing_Type)
VALUES
(:cname, :billtype);
If you need to know what the last value assigned in your session was, you can then use the IDENTITY_VAL_LOCAL() function.
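For example (illustrative only; the exact host-variable handling depends on your environment):
-- immediately after the INSERT, in the same session:
SELECT IDENTITY_VAL_LOCAL() AS last_customer_id
FROM SYSIBM.SYSDUMMY1;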
B) In my environment, I generally specify GENERATED BY DEFAULT. This is partly due to the nature of our principal programming language, ILE RPG-IV, where developers have traditionally allowed the compiler to use the entire record definition. It also means I can tell everyone to use a sequence to generate ID values for a given table or set of tables.
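For comparison, the same column declared that way might look like this (a sketch; with GENERATED BY DEFAULT you can still supply your own value, e.g. from a sequence):
CREATE TABLE Customers
( CustomerID Int NOT NULL
    GENERATED BY DEFAULT AS IDENTITY
    PRIMARY KEY
, Name Varchar(50)
, Billing_Type Char(1)
, Balance Dec(9,2) NOT NULL DEFAULT
);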
You can grant SELECT to only you, but if there are others with SECADM or other privileges, they could still insert.
You can do something with a trigger: for example, check the current session user, and only allow the insert if it is your user.
if (SESSION_USER <> 'Alex') then
  rollback -- or generate an exception
end if;
It seems that you also want to keep just one row; you can control that in a trigger as well:
select count(0) into value from state;
if (value > 1) then
  rollback -- or generate an exception
end if;
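A rough sketch of how those two checks could be combined in a DB2 SQL PL trigger (the trigger name, SQLSTATEs and user name are illustrative; triggers cannot issue ROLLBACK, so raising a condition with SIGNAL is the usual way to reject the change; run it with an alternative statement terminator such as @):
CREATE TRIGGER state_guard
NO CASCADE BEFORE INSERT ON state
FOR EACH ROW
BEGIN ATOMIC
  DECLARE v_rows INT;
  -- only the designated user may insert
  IF SESSION_USER <> 'ALEX' THEN
    SIGNAL SQLSTATE '75001' SET MESSAGE_TEXT = 'inserts are not allowed for this user';
  END IF;
  -- allow at most one row in the table
  SET v_rows = (SELECT COUNT(*) FROM state);
  IF v_rows > 0 THEN
    SIGNAL SQLSTATE '75002' SET MESSAGE_TEXT = 'state already contains a row';
  END IF;
END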
I am running a lean startup. While we have good protection against hardware failures, we often need to manipulate customer data by running SQL queries directly on the live database. It is very easy to make mistakes (like thinking you are running a delete query on the test database when you are actually running it on the live database). I am wondering if there is something I can do to create "undo" functionality for every SQL query that is run against the database (even if someone connects directly via psql). I do not care about performance or storage space, as keeping the data safe is my only concern.
It would be excellent if the UNDO function could skip specific transactions in the transaction log, so that going backwards would not interfere with correct transactions, like new customers signing up. We have very low volumes of such transactions, so they can be looked through manually without problem.
Well, I think the only option you have is to create a history table.
I've done this with the help of triggers.
At every update you insert one row for every value that changed.
My history table looks like this (MySQL):
CREATE TABLE IF NOT EXISTS `history` (
`id_history` INT NOT NULL AUTO_INCREMENT ,
`event` ENUM('INSERT', 'UPDATE', 'DELETE') NOT NULL ,
`table` VARCHAR(255) NOT NULL ,
`id` INT NOT NULL ,
`field` VARCHAR(255) NULL ,
`old_value` TEXT NULL ,
`datetime` DATETIME NOT NULL ,
PRIMARY KEY (`id_history`));
And the trigger:
DELIMITER //
CREATE TRIGGER test_table_update BEFORE UPDATE ON test_table
FOR EACH ROW
BEGIN
IF OLD.value1 != NEW.value1 THEN
INSERT INTO history VALUES(NULL, 'UPDATE', 'test_table', OLD.id_test_table, 'value1', OLD.value1, NOW());
END IF;
...
END; //
DELIMITER ;
With this information you can undo every change and also roll a row back to a specific date.
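As an illustration, restoring the most recent old value of one column for one row could look like this (a sketch only; the row id 42 and column value1 are illustrative):
UPDATE test_table
SET value1 = (
    SELECT old_value
    FROM history
    WHERE `table` = 'test_table'
      AND id = 42
      AND field = 'value1'
    ORDER BY id_history DESC
    LIMIT 1
)
WHERE id_test_table = 42;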
I am setting up a SaaS application that multiple clients will use to enter data. However, I have certain fields that Client A may want to force unique, Client B may want to allow dupes. Obviously if I am going to allow any one client to have dupes, the table may not have a unique constraint on it. The downside is that If I want to enforce a unique constraint for some clients, I will have to go about it in some other way.
Has anyone tackled a problem like this, and if so, what are the common solutions and or potential pitfalls to look out for?
I am thinking a trigger that checks for any possible unique flags may be the only way to enforce this correctly. If I rely on the business layer, there is no guarantee that the app will do a unique check before every insert.
SOLUTION:
First I considered a unique index, but ruled it out because an index cannot do any sort of joins or lookups; it can only express values in the row. I also didn't want to modify the index every time a client was added or a client's uniqueness preference changed.
Then I looked into CHECK constraints, and after some fooling around, built one function that covers both hypothetical columns a client can mark as unique or not.
Here are the test tables, data, and function I used to
verify that a check constraint could do all that I wanted.
-- Clients Table
CREATE TABLE [dbo].[Clients](
[ID] [int] NOT NULL,
[Name] [varchar](50) NOT NULL,
[UniqueSSN] [bit] NOT NULL,
[UniqueVIN] [bit] NOT NULL
) ON [PRIMARY]
-- Test Client Data
INSERT INTO Clients(ID, Name, UniqueSSN, UniqueVIN) VALUES(1,'A Corp',0,0)
INSERT INTO Clients(ID, Name, UniqueSSN, UniqueVIN) VALUES(2,'B Corp',1,0)
INSERT INTO Clients(ID, Name, UniqueSSN, UniqueVIN) VALUES(3,'C Corp',0,1)
INSERT INTO Clients(ID, Name, UniqueSSN, UniqueVIN) VALUES(4,'D Corp',1,1)
-- Cases Table
CREATE TABLE [dbo].[Cases](
[ID] [int] IDENTITY(1,1) NOT NULL,
[ClientID] [int] NOT NULL,
[ClaimantName] [varchar](50) NOT NULL,
[SSN] [varchar](12) NULL,
[VIN] [varchar](17) NULL
) ON [PRIMARY]
-- Check Uniques Function
CREATE FUNCTION CheckUniques(@ClientID int)
RETURNS int -- 0: OK to insert, >0: cannot insert
AS
BEGIN
DECLARE @SSNCheck int
DECLARE @VinCheck int
SELECT @SSNCheck = 0
SELECT @VinCheck = 0
IF (SELECT UniqueSSN FROM Clients WHERE ID = @ClientID) = 1
BEGIN
SELECT @SSNCheck = COUNT(SSN) FROM Cases cs WHERE ClientID = @ClientID AND (SELECT COUNT(SSN) FROM Cases c2 WHERE c2.SSN = cs.SSN) > 1
END
IF (SELECT UniqueVIN FROM Clients WHERE ID = @ClientID) = 1
BEGIN
SELECT @VinCheck = COUNT(VIN) FROM Cases cs WHERE ClientID = @ClientID AND (SELECT COUNT(VIN) FROM Cases c2 WHERE c2.VIN = cs.VIN) > 1
END
RETURN @SSNCheck + @VinCheck
END
-- Add Check Constraint to table
ALTER TABLE Cases
ADD Constraint chkClientUniques CHECK(dbo.CheckUniques(ClientID) = 0)
-- Now confirm constraint using test data
-- Client A: Confirm that both duplicate SSN and VIN's are allowed
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(1, 'Alice', '111-11-1111', 'A-1234')
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(1, 'Bob', '111-11-1111', 'A-1234')
-- Client B: Confirm that Unique SSN is enforced, but duplicate VIN allowed
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(2, 'Charlie', '222-22-2222', 'B-2345') -- Should work
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(2, 'Donna', '222-22-2222', 'B-2345') -- Should fail
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(2, 'Evan', '333-33-3333', 'B-2345') -- Should Work
-- Client C: Confirm that Unique VIN is enforced, but duplicate SSN allowed
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(3, 'Evan', '444-44-4444', 'C-3456') -- Should work
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(3, 'Fred', '444-44-4444', 'C-3456') -- Should fail
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(3, 'Ginny', '444-44-4444', 'C-4567') -- Should work
-- Client D: Confirm that both Unique SSN and VIN are enforced
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(4, 'Henry', '555-55-5555', 'D-1234') -- Should work
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(4, 'Isaac', '666-66-6666', 'D-1234') -- Should fail
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(4, 'James', '555-55-5555', 'D-2345') -- Should fail
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(4, 'Kevin', '555-55-5555', 'D-1234') -- Should fail
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(4, 'Lisa', '777-77-7777', 'D-3456') -- Should work
EDIT:
I had to modify the function a few times to catch NULL values in the dupe check, but everything appears to be working now.
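For reference, the NULL handling in the SSN check could look something like this (a sketch of one way to do it, not necessarily the exact change that was made):
IF (SELECT UniqueSSN FROM Clients WHERE ID = @ClientID) = 1
BEGIN
    SELECT @SSNCheck = COUNT(SSN) FROM Cases cs
    WHERE ClientID = @ClientID
      AND cs.SSN IS NOT NULL
      AND (SELECT COUNT(SSN) FROM Cases c2 WHERE c2.SSN = cs.SSN) > 1
END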
One approach is to use a CHECK constraint instead of a unique constraint.
This CHECK constraint will be backed by a SCALAR function that will:
- take ClientID as input
- cross-reference ClientID against a lookup table to see if duplicates are allowed (client.dups)
- if duplicates are not allowed, check for duplicates in the table
Something like
ALTER TABLE TBL ADD CONSTRAINT CK_TBL_UNIQ CHECK(dbo.IsThisOK(clientID)=1)
If you can identify rows in the table for each client, depending on your DBMS you could do something like this:
CREATE UNIQUE INDEX uq_some_col
ON the_table(some_column, other_column, client_id)
WHERE client_id IN (1,2,3);
(The above is valid for PostgreSQL and, I believe, SQL Server 2008 and later.)
The downside is that you will need to re-create that index each time a new client is added that requires those columns to be unique.
You will probably have some checks in the business layer as well, mostly to be able to show proper error messages.
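For example, adding a fourth client to the unique set might look like this in PostgreSQL (create the wider index under a new name first so uniqueness stays enforced, then drop the old one; index names are illustrative):
CREATE UNIQUE INDEX uq_some_col_v2
    ON the_table(some_column, other_column, client_id)
    WHERE client_id IN (1,2,3,4);
DROP INDEX uq_some_col;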
That's perfectly possible now on SQL Server 2008 (tested on my SQL Server 2008 box here):
create table a
(
cx varchar(50)
);
create unique index ux_cx on a(cx) where cx <> 'US';
insert into a values('PHILIPPINES'),('JAPAN'),('US'),('CANADA');
-- still ok to insert duplicate US
insert into a values('US');
-- will fail here
insert into a values('JAPAN');
Related article: http://www.ienablemuch.com/2010/12/postgresql-said-sql-server2008-said-non.html
There are a few things you can do with this; it just depends on when/how you want to handle it.
1. You could use a CHECK constraint and modify it to do different lookups based on the client that is using it.
2. You could handle this in the business tier, but that will not protect you from raw database updates.
I personally have found that #1 can get too hard to maintain, especially if you get a high number of clients. I've found that doing it at the business level is a lot easier, and you can control it in a centralized location.
There are other options, such as a table per client, that could work as well, but these two are at least the most common that I've seen.
You could add a helper column. The column would be equal to the primary key for the application that allows duplicates, and a constant value for the other application. Then you create a unique constraint on (UniqueHelper, Col1).
For the non-dupe client, it will have a constant in the helper column, forcing the column to be unique.
For the dupe client, the helper column is equal to the primary key, so the unique constraint is satisfied by that column alone. That application can add any number of dupes.
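A rough sketch of the idea (SQL Server flavoured; the table, columns and values are illustrative, and ID is assumed to be supplied by the application, e.g. from a sequence, so the dupe-allowing client can copy it into UniqueHelper in the same INSERT):
CREATE TABLE Items (
    ID int NOT NULL PRIMARY KEY,
    ClientID int NOT NULL,
    Col1 varchar(50) NOT NULL,
    UniqueHelper int NOT NULL
);
CREATE UNIQUE INDEX uq_items_helper ON Items(UniqueHelper, Col1);
-- client that must keep Col1 unique: constant helper value
INSERT INTO Items (ID, ClientID, Col1, UniqueHelper) VALUES (1, 10, 'ABC', 0);
-- a second row with ('ABC', 0) would now fail: duplicate (0, 'ABC')
-- client that allows duplicates: helper = primary key, so each row passes on its own
INSERT INTO Items (ID, ClientID, Col1, UniqueHelper) VALUES (3, 20, 'ABC', 3);
INSERT INTO Items (ID, ClientID, Col1, UniqueHelper) VALUES (4, 20, 'ABC', 4);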
One possibility might be to use BEFORE INSERT and and BEFORE UPDATE triggers that can selectively enforce uniqueness.
And another possibility (kind of a kludge) would be to have an additional dummy field that is populated with unique values for one customer and duplicate values for the other customer. Then build a unique index on the combination of the dummy field and the visible field.
@Neil: I asked in my comment above what your reasons were for putting everything in the same table, and you simply ignored that aspect of my comment and said everything was "plain and simple". Do you really want to hear the downsides of the conditional-constraints approach in a SaaS context?
You don't say how many different sets of rules this table in your SaaS application may eventually need to incorporate. Will there be only two variations?
Performance is a consideration. Although each customer would have access to a dedicated conditional index (or indexes), fetching data from the base table could become slower and slower as data from additional customers is added and the table grows.
If I were developing a SaaS application, I'd go with dedicated transaction tables wherever appropriate. Customers could share standard tables like zipcodes and counties, and even share domain-specific tables like Products, Categories or WidgetTypes. I'd probably build dynamic SQL statements in stored procedures, in which the correct table for the current customer is chosen and placed into the statement being constructed, e.g.
sql = "select * from " + DYNAMIC_ORDERS_TABLE + " where ... ")
If performance took a hit because the dynamic statements had to be compiled all the time, I might consider writing a dedicated stored-procedure generator: sp_ORDERS_SELECT_25_v1.0 (where "25" is the id assigned to a particular user of the SaaS app and there's a version suffix).
You're going to have to use some dynamic SQL anyway, because the customer id must be appended to the WHERE clause of every one of your ad hoc queries in order to take advantage of your conditional indexes:
sql = " select * from orders where ... and customerid = " + CURRENT_CUSTOMERID
Your conditional indexes involve your customer/user column, so that column must be part of every query to ensure that only that customer's subset of rows is selected from the table.
So, when all is said and done, you're really only saving yourself the effort required to create a dedicated table and avoiding some dynamic SQL on your bread-and-butter queries. Writing dynamic SQL for those queries doesn't take much effort, and it's certainly less messy than managing multiple customer-specific indexes on the same shared table; and if you're writing dynamic SQL anyway, you could just as easily substitute the dedicated table name as append the customerid = 25 clause to every query. The performance loss of dynamic SQL would be more than offset by the performance gain of dedicated tables.
P.S. Let's say your app has been running for a year or so, you have multiple customers, and your table has grown large. You want to add another customer, and their new set of customer-specific indexes, to the large production table. Can you slipstream these new indexes and constraints in during normal business hours, or will you have to schedule their creation for a time when usage is relatively light?
You don't make clear what benefit there is in having the data from separate universes mingled in the same table.
Uniqueness constraints are part of the entity definition and each entity needs its own table. I'd create separate tables.