I created two tables, Employeeinfo and Employeerequests.
Table Employeeinfo can have one unique user with columns:
id (primary key, auto increment)
name
dep
address
and table Employeerequests can hold multiple requests against one user ID, with columns:
id (primary key, auto increment)
CustomerID (foreign key to Employeeinfo(id))
category
requests.
Now I want to design a stored procedure so that I can insert values into both tables at the same time. Please help; I am very new to SQL. Thanks in advance.
This is a bit long for a comment.
In SQL Server, a single INSERT statement can only target one table. You presumably want to provide both employee and request information at once, so that limitation is a real problem.
You can get around the limitation by creating a view that combines the two tables and then defining an INSTEAD OF INSERT trigger on the view. This is explained in the documentation.
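If you do go that route, a rough sketch might look like the following (the view and trigger names are invented, and the simple match back to the parent row by name/dep/address only really works for single-row inserts with distinct values):

-- Hypothetical sketch of the view-plus-trigger approach described above.
CREATE VIEW dbo.EmployeeWithRequest AS
    SELECT e.Name, e.dep, e.address, r.category, r.requests
    FROM dbo.Employeeinfo e
    JOIN dbo.Employeerequests r ON r.CustomerID = e.id;
GO
CREATE TRIGGER dbo.trg_EmployeeWithRequest_Insert
ON dbo.EmployeeWithRequest
INSTEAD OF INSERT
AS
BEGIN
    -- Insert the parent rows first, then the child rows keyed on the new ids.
    INSERT INTO dbo.Employeeinfo (Name, dep, address)
    SELECT Name, dep, address FROM inserted;

    INSERT INTO dbo.Employeerequests (CustomerID, category, requests)
    SELECT e.id, i.category, i.requests
    FROM inserted i
    JOIN dbo.Employeeinfo e
      ON e.Name = i.Name AND e.dep = i.dep AND e.address = i.address;
END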
That said, you do not seem to have extensive SQL knowledge, so I would recommend simply using two separate INSERT statements, one for each table. You can wrap them in a stored procedure if you find that convenient.
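For illustration, a minimal sketch of such a procedure, assuming the columns described in the question (the procedure name, parameter names, and data types are made up); SCOPE_IDENTITY() carries the new parent id into the second insert:

CREATE PROCEDURE dbo.AddEmployeeWithRequest
    @Name     varchar(50),
    @Dep      varchar(50),
    @Address  varchar(100),
    @Category varchar(50),
    @Request  varchar(200)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    INSERT INTO dbo.Employeeinfo (Name, dep, address)
    VALUES (@Name, @Dep, @Address);

    -- SCOPE_IDENTITY() returns the id generated by the insert above.
    INSERT INTO dbo.Employeerequests (CustomerID, category, requests)
    VALUES (SCOPE_IDENTITY(), @Category, @Request);

    COMMIT TRANSACTION;
END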
In the stored procedure, you can use the OUTPUT clause of the INSERT statement:
DECLARE @MyTableVar TABLE (NewCustomerID INT);

-- The OUTPUT clause has access to all the columns in the table,
-- even those not part of the INSERT column list, such as the identity column.
INSERT INTO [dbo].[Employeeinfo] ([Name], [dep], [address])
OUTPUT INSERTED.Id INTO @MyTableVar
SELECT 'Test', 'TestDep', 'TestAddress';

-- Then insert into the child table:
INSERT INTO [dbo].[Employeerequests] (CustomerID, category)
SELECT NewCustomerID, 'TestCat'
FROM @MyTableVar;
Hope that helps!
I have 3 SQL statements that I would like to combine into just one so I don't have to make multiple requests to my database from my program (Java).
My DB is PostgreSQL 9.4
The first one creates a new user in umgmt_users:
INSERT INTO umgmt_users ("user") VALUES ('test1')
The second one gets the id of that user (the DB is Postgres and the id data type is serial, so it gets assigned automatically, with me/the program not knowing what id the user will get):
SELECT umgmt_users.id
FROM umgmt_users
WHERE umgmt_users.user = 'test1'
The third is to add the just-created user with his id (which I need the second statement for) and some other values into a different table:
INSERT INTO
umgmt_user_oe_fac_role ("user_id", "oe_id", "fac_id", "role_id")
VALUES ('ID OF USER test1 created in first statement', '1', '2', '1');
Is there a way to get all three Statements into one?
create user
look up the ID he got assigned
insert his ID + other values into a different table
I'm not that good at SQL. I tried to put brackets around the SELECT and put it into the INSERT, and also looked at UNION and WITH, but cannot get it to work...
EDIT: Ended up using this solution from a_horse_with_no_name
with new_user as (
INSERT INTO umgmt_users ("user") VALUES ('test1')
returning id
)
INSERT INTO umgmt_user_oe_fac_role (user_id, oe_id, fac_id, role_id)
SELECT id, 1, 2, 1
FROM new_user;
All you need is two inserts:
INSERT INTO umgmt_users ("user") VALUES ('test1');
INSERT INTO umgmt_user_oe_fac_role (user_id, oe_id, fac_id, role_id)
VALUES (lastval(), 1, 2, 1);
In order for lastval() to work correctly there must be no other statement between the two inserts, and they have to be run in a single transaction (so autocommit needs to be turned off).
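Run as a single batch with autocommit off (for example from psql or over JDBC), that looks like:

BEGIN;
INSERT INTO umgmt_users ("user") VALUES ('test1');
INSERT INTO umgmt_user_oe_fac_role (user_id, oe_id, fac_id, role_id)
VALUES (lastval(), 1, 2, 1);
COMMIT;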
Alternatively you can use a data modifying CTE which is then executed as a single statement:
with new_user as (
INSERT INTO umgmt_users ("user") VALUES ('test1')
returning id
)
INSERT INTO umgmt_user_oe_fac_role (user_id, oe_id, fac_id, role_id)
SELECT id, 1, 2, 1
FROM new_user;
Please don't put numbers in single quotes.
The answer to this is: it's impossible to combine these into a single plain-vanilla ANSI SQL statement.
The first and third ones talk about two different tables altogether.
The second one is a Select Statement which is a different type of statement from the other two.
I am setting up a SaaS application that multiple clients will use to enter data. However, I have certain fields that Client A may want to force to be unique while Client B may want to allow duplicates. Obviously, if I am going to allow any one client to have dupes, the table cannot have a unique constraint on the column. The downside is that if I want to enforce a unique constraint for some clients, I will have to go about it some other way.
Has anyone tackled a problem like this, and if so, what are the common solutions and or potential pitfalls to look out for?
I am thinking a trigger that checks for any possible unique flags may be the only way to enforce this correctly. If I rely on the business layer, there is no guarantee that the app will do a unique check before every insert.
SOLUTION:
First I considered a unique index, but ruled it out because an index cannot do any sort of joins or lookups, only reference values in the row itself. And I didn't want to modify the index every time a client was added or a client's uniqueness preference changed.
Then I looked into CHECK constraints and, after some fooling around, built one function that validates both hypothetical columns that a client would be able to mark as unique or not.
Here are the test tables, data, and function I used to verify that a check constraint could do all that I wanted.
-- Clients Table
CREATE TABLE [dbo].[Clients](
[ID] [int] NOT NULL,
[Name] [varchar](50) NOT NULL,
[UniqueSSN] [bit] NOT NULL,
[UniqueVIN] [bit] NOT NULL
) ON [PRIMARY]
-- Test Client Data
INSERT INTO Clients(ID, Name, UniqueSSN, UniqueVIN) VALUES(1,'A Corp',0,0)
INSERT INTO Clients(ID, Name, UniqueSSN, UniqueVIN) VALUES(2,'B Corp',1,0)
INSERT INTO Clients(ID, Name, UniqueSSN, UniqueVIN) VALUES(3,'C Corp',0,1)
INSERT INTO Clients(ID, Name, UniqueSSN, UniqueVIN) VALUES(4,'D Corp',1,1)
-- Cases Table
CREATE TABLE [dbo].[Cases](
[ID] [int] IDENTITY(1,1) NOT NULL,
[ClientID] [int] NOT NULL,
[ClaimantName] [varchar](50) NOT NULL,
[SSN] [varchar](12) NULL,
[VIN] [varchar](17) NULL
) ON [PRIMARY]
-- Check Uniques Function
CREATE FUNCTION CheckUniques(@ClientID int)
RETURNS int -- 0: OK to insert, >0: cannot insert
AS
BEGIN
    DECLARE @SSNCheck int
    DECLARE @VinCheck int
    SELECT @SSNCheck = 0
    SELECT @VinCheck = 0

    IF (SELECT UniqueSSN FROM Clients WHERE ID = @ClientID) = 1
    BEGIN
        SELECT @SSNCheck = COUNT(SSN) FROM Cases cs WHERE ClientID = @ClientID AND (SELECT COUNT(SSN) FROM Cases c2 WHERE c2.SSN = cs.SSN) > 1
    END

    IF (SELECT UniqueVIN FROM Clients WHERE ID = @ClientID) = 1
    BEGIN
        SELECT @VinCheck = COUNT(VIN) FROM Cases cs WHERE ClientID = @ClientID AND (SELECT COUNT(VIN) FROM Cases c2 WHERE c2.VIN = cs.VIN) > 1
    END

    RETURN @SSNCheck + @VinCheck
END
-- Add Check Constraint to table
ALTER TABLE Cases
ADD Constraint chkClientUniques CHECK(dbo.CheckUniques(ClientID) = 0)
-- Now confirm constraint using test data
-- Client A: Confirm that both duplicate SSN and VIN's are allowed
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(1, 'Alice', '111-11-1111', 'A-1234')
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(1, 'Bob', '111-11-1111', 'A-1234')
-- Client B: Confirm that Unique SSN is enforced, but duplicate VIN allowed
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(2, 'Charlie', '222-22-2222', 'B-2345') -- Should work
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(2, 'Donna', '222-22-2222', 'B-2345') -- Should fail
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(2, 'Evan', '333-33-3333', 'B-2345') -- Should Work
-- Client C: Confirm that Unique VIN is enforced, but duplicate SSN allowed
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(3, 'Evan', '444-44-4444', 'C-3456') -- Should work
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(3, 'Fred', '444-44-4444', 'C-3456') -- Should fail
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(3, 'Ginny', '444-44-4444', 'C-4567') -- Should work
-- Client D: Confirm that both Unique SSN and VIN are enforced
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(4, 'Henry', '555-55-5555', 'D-1234') -- Should work
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(4, 'Isaac', '666-66-6666', 'D-1234') -- Should fail
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(4, 'James', '555-55-5555', 'D-2345') -- Should fail
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(4, 'Kevin', '555-55-5555', 'D-1234') -- Should fail
INSERT INTO Cases (ClientID, ClaimantName, SSN, VIN) VALUES(4, 'Lisa', '777-77-7777', 'D-3456') -- Should work
EDIT:
Had to modify the function a few times to catch NULL values in the dupe check, but all appears to be working now.
One approach is to use a CHECK constraint instead of a UNIQUE constraint.
This CHECK constraint will be backed by a SCALAR function that will
take as input ClientID
cross-ref ClientID against a lookup table to see if duplicates are allowed (client.dups)
if not allowed, check for duplicates in the table
Something like
ALTER TABLE TBL ADD CONSTRAINT CK_TBL_UNIQ CHECK(dbo.IsThisOK(clientID)=1)
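For illustration, the backing function might look roughly like this (the client lookup table, its dups column, and some_column as the value being checked are assumptions based on the description above); it returns 1 when the insert is acceptable:

CREATE FUNCTION dbo.IsThisOK (@clientID int)
RETURNS int
AS
BEGIN
    -- Duplicates allowed for this client? Then the row is always OK.
    IF (SELECT dups FROM dbo.client WHERE id = @clientID) = 1
        RETURN 1;

    -- Otherwise the row is OK only if no value appears more than once for this client.
    IF EXISTS (
        SELECT some_column
        FROM dbo.TBL
        WHERE clientID = @clientID
        GROUP BY some_column
        HAVING COUNT(*) > 1
    )
        RETURN 0;

    RETURN 1;
END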
If you can identify rows in the table for each client, depending on your DBMS you could do something like this:
CREATE UNIQUE INDEX uq_some_col
ON the_table(some_column, other_column, client_id)
WHERE client_id IN (1,2,3);
(The above is valid for PostgreSQL; for SQL Server you need 2008 or later, which introduced filtered indexes.)
The downside is that you will need to re-create that index each time a new client is added that requires those columns to be unique.
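For example, when a fourth client later needs those columns to be unique, that re-creation might look like this (PostgreSQL syntax, reusing the names from the index above):

DROP INDEX uq_some_col;
CREATE UNIQUE INDEX uq_some_col
    ON the_table(some_column, other_column, client_id)
    WHERE client_id IN (1, 2, 3, 4);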
You will probably have some checks in the business layer as well, mostly to be able to show proper error messages.
That's perfectly possible now on SQL Server 2008 (tested on my SQL Server 2008 box here):
create table a
(
cx varchar(50)
);
create unique index ux_cx on a(cx) where cx <> 'US';
insert into a values('PHILIPPINES'),('JAPAN'),('US'),('CANADA');
-- still ok to insert duplicate US
insert into a values('US');
-- will fail here
insert into a values('JAPAN');
Related article: http://www.ienablemuch.com/2010/12/postgresql-said-sql-server2008-said-non.html
There are a few things you can do with this, just depends on when/how you want to handle this.
You could use a CHECK constraint and modify it to do different lookups based on the client that is using it.
You could use the business tier to handle this, but it will not protect you from raw database updates.
I personally have found that #1 can get too hard to maintain, especially if you get a high number of clients. I've found that doing it at the business level is a lot easier, and you can control it at a centralized location.
There are other options, such as a table per client, that could work as well, but these two are at least the most common that I've seen.
You could add a helper column. The column would be equal to the primary key for the application that allows duplicates, and a constant value for the other application. Then you create a unique constraint on (UniqueHelper, Col1).
For the non-dupe client, it will have a constant in the helper column, forcing the column to be unique.
For the dupe column, the helper column is equal to the primary key, so the unique constraint is satisfied by that column alone. That application can add any number of dupes.
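A rough sketch of that idea, with invented table and column names:

CREATE TABLE dbo.SomeData (
    ID           int IDENTITY(1,1) PRIMARY KEY,
    ClientID     int NOT NULL,
    Col1         varchar(50) NOT NULL,
    UniqueHelper int NOT NULL,
    CONSTRAINT UQ_SomeData UNIQUE (UniqueHelper, Col1)
);

-- Client that enforces uniqueness: always store the same constant in UniqueHelper,
-- so (UniqueHelper, Col1) is unique exactly when Col1 is unique.
INSERT INTO dbo.SomeData (ClientID, Col1, UniqueHelper) VALUES (2, 'value-x', 0);
-- A second 'value-x' with the same constant would violate UQ_SomeData.

-- Client that allows duplicates: store a per-row unique value (the answer suggests
-- copying the primary key), so the pair never collides.
INSERT INTO dbo.SomeData (ClientID, Col1, UniqueHelper) VALUES (1, 'value-x', 101);
INSERT INTO dbo.SomeData (ClientID, Col1, UniqueHelper) VALUES (1, 'value-x', 102);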
One possibility might be to use BEFORE INSERT and BEFORE UPDATE triggers that can selectively enforce uniqueness.
And another possibility (kind of a kludge) would be to have an additional dummy field that is populated with unique values for one customer and duplicate values for the other customer. Then build a unique index on the combination of the dummy field and the visible field.
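As a sketch of the trigger idea (SQL Server has no BEFORE triggers, so this uses an AFTER trigger that rolls back the offending insert; it borrows the Clients/Cases columns from the question above and only covers the SSN flag):

CREATE TRIGGER trg_Cases_UniqueSSN
ON dbo.Cases
AFTER INSERT, UPDATE
AS
BEGIN
    IF EXISTS (
        SELECT 1
        FROM inserted i
        JOIN dbo.Clients cl ON cl.ID = i.ClientID
        WHERE cl.UniqueSSN = 1
          AND EXISTS (SELECT 1
                      FROM dbo.Cases c
                      WHERE c.ClientID = i.ClientID
                        AND c.SSN = i.SSN
                        AND c.ID <> i.ID)
    )
    BEGIN
        RAISERROR ('Duplicate SSN is not allowed for this client.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END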
@Neil: I had asked in my comment above what your reasons were for putting everything in the same table, and you simply ignored that aspect of my comment and said everything was "plain and simple". Do you really want to hear the downsides of the conditional-constraints approach in a SaaS context?
You don't say how many different sets of rules this table in your SaaS application may eventually need to incorporate. Will there be only two variations?
Performance is a consideration. Although each customer would have access to a dedicated conditional index|indices, fetching data from the base table could become slower and slower as the data from additional customers is added to it and the table grows.
If I were developing a SaaS application, I'd go with dedicated transaction tables wherever appropriate. Customers could share standard tables like zipcodes and counties, and even share domain-specific tables like Products or Categories or WidgetTypes or whatever. I'd probably build dynamic SQL statements in stored procedures in which the correct table for the current customer was chosen and placed in the statement being constructed, e.g.
sql = "select * from " + DYNAMIC_ORDERS_TABLE + " where ... ")
If performance were taking a hit because the dynamic statements had to be compiled all the time, I might consider writing a dedicated stored-procedure generator: sp_ORDERS_SELECT_25_v1.0 {where "25" is the id assigned to a particular user of the SaaS app and there's a version suffix}.
You're going to have to use some dynamic SQL because the customer id must be appended to the WHERE-clause of every one of your ad hoc queries in order to take advantage of your conditional indexes:
sql = " select * from orders where ... and customerid = " + CURRENT_CUSTOMERID
Your conditional indexes involve your customer/user column and so that column must be made part of every query in order to ensure that only that customer's subset of rows are selected out of the table.
So, when all is said and done, you're really saving yourself the effort required to create a dedicated table and avoiding some dynamic SQL on your bread-and-butter queries. Writing dynamic SQL for bread-and-butter queries doesn't take much effort, and it's certainly less messy than having to manage multiple customer-specific indexes on the same shared table; and if you're writing dynamic SQL you could just as easily substitute the dedicated table name as append the customerid=25 clause to every query. The performance loss of dynamic SQL would be more than offset by the performance gain of dedicated tables.
P.S. Let's say your app has been running for a year or so and you have multiple customers and your table has grown large. You want to add another customer and their new set of customer-specific indexes to the large production table. Can you slipstream these new indexes and constraints during normal business hours or will you have to schedule the creation of these indexes for a time when usage is relatively light?
You don't make clear what benefit there is in having the data from separate universes mingled in the same table.
Uniqueness constraints are part of the entity definition and each entity needs its own table. I'd create separate tables.
Hi, I have a problem inserting data into multiple tables. I have defined a primary key and a foreign key in the tables. Now I want to insert data into both tables in a single query. How can I do this?
Your question isn't exactly clear on what the particular problem is. I can see three possibilities:
1. You want to insert into two tables with a single INSERT statement
2. You want to do two inserts, but without anything else being able to 'get in the middle'
3. You want to insert into one table, then get the primary key to insert into the second table
The answer to 1. is simple:
You can't.
The answer to 2. is simple too:
BEGIN TRANSACTION
INSERT INTO <table1> (a,b,c) VALUES (1,2,3)
INSERT INTO <table2> (a,b,c) VALUES (1,2,3)
COMMIT TRANSACTION
The answer to 3. has several possibilities, each depending on exactly what you want to do. Most likely you want to use SCOPE_IDENTITY(), but you may also want to look up @@IDENTITY and IDENT_CURRENT() to understand the various options and complexities.
BEGIN TRANSACTION
INSERT INTO <dimension_table> (name)
VALUES ('my new item')
INSERT INTO <fact_table> (item_id, item_value)
VALUES (SCOPE_IDENTITY(), 1)
COMMIT TRANSACTION
This is what transactions are meant for. Standard SQL does not permit a single statement inserting into multiple tables at once. The correct way to do it is:
-- begin transaction
insert into table 1 ...
insert into table 2 ...
commit
Does your database support the INSERT ALL construct? If so, that is the best way to do this; in fact, it's the only way to do it in a single statement. I posted an example of this construct in another SO thread (that example syntax comes from Oracle SQL).
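For reference, a minimal sketch of what such an Oracle-style multi-table insert looks like (the table and column names are invented for illustration):

-- Hypothetical Oracle INSERT ALL: both target rows are fed from the same SELECT.
INSERT ALL
    INTO parent_table (parent_id, name)        VALUES (src_id, src_name)
    INTO child_table  (parent_id, detail_text) VALUES (src_id, src_detail)
SELECT 1 AS src_id, 'example name' AS src_name, 'example detail' AS src_detail
FROM dual;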
The other option is to build a transactional stored procedure which inserts a record into the primary key table followed by a record into the referencing table.
One more choice is to use an ORM (like Hibernate or NHibernate): you build your object, set the related objects on it, and finally just save the main object, like:
A a = new A();
B b = new B();
C c = new C();
a.setB(b);
a.setC(c);
DAO.saveOrUpdate(a);
Note that the DAO.saveOrUpdate(a); line only works like this with Hibernate (with the relations mapped to cascade), but it inserts data into all three tables A, B, and C.