3 tables, 2 DBs, 1 Stored Procedure - sql

I'm a novice when it comes to Stored Procedures in SQL Server Management Studio. I have an application that I was told to make the following changes to using a stored procedure:
Step 1. User types in an item number.
Step 2. Customer name, address, etc. displays in the other fields on the same form.
There are 3 tables: Bulk orders, Small orders, and Customer information.
Bulk orders and small orders are in Database_1 and Customer information is in Database_2.
The primary key for small orders is the order number. A column in small orders contains the customer number for each order. That customer number is the primary key in the customer table.
The bulk orders table is similar.
I want to include a conditional statement that says: if the order number is found in the small orders table, show the data from the customer table that correlates with that order number. I've attempted this multiple ways, but keep getting a "The multi-part identifier.... could not be bound" error.
For example:
SELECT DB1.db.Customer_Table.Customer_Column AS CustomerNumber;
IF(CustomerNumber NOT LIKE '%[a-z]%')
BEGIN
SELECT * FROM db.small_orders_table;
END
ELSE
BEGIN
SELECT * FROM db.bulk_orders_table;
END
Please help.

Sounds like it's two databases on the same server. In that case, you'll need to specify the fully qualified table name (database.schema.table) when referencing a table in the other database from the one where your stored procedure lives, e.g.:
Database_1.db.small_orders_table
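For the scenario in the question, a minimal sketch might look like the following; it assumes the procedure lives in Database_1 and that the schema, table, and column names (dbo, Small_Orders, Bulk_Orders, Customer_Information, Order_Number, Customer_Number) are placeholders you would adjust to your real schema:
CREATE PROCEDURE dbo.GetCustomerForOrder
    @OrderNumber varchar(30)
AS
BEGIN
    -- check the small orders table first
    IF EXISTS (SELECT 1 FROM Database_1.dbo.Small_Orders WHERE Order_Number = @OrderNumber)
        SELECT c.*
        FROM Database_1.dbo.Small_Orders AS o
        JOIN Database_2.dbo.Customer_Information AS c ON c.Customer_Number = o.Customer_Number
        WHERE o.Order_Number = @OrderNumber;
    ELSE
        SELECT c.*
        FROM Database_1.dbo.Bulk_Orders AS o
        JOIN Database_2.dbo.Customer_Information AS c ON c.Customer_Number = o.Customer_Number
        WHERE o.Order_Number = @OrderNumber;
END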

First of all, you cannot use aliases as variables. If you want to assign a value to a variable in order to test it, you have to use a SELECT statement like SELECT @var = Customer_Column FROM <YourTableFullName> WHERE <condition>. Then you can use @var (which must be declared beforehand) for your test.
About the error you're experiencing: you're using fully qualified names the wrong way. If you're on the same server (different databases), you just need to specify the database name, then the schema, then the object name. Suppose you have the following objects in Database1:
USE Database1;
GO
CREATE TABLE dbo.Table1
(
id int IDENTITY(1, 1) NOT NULL PRIMARY KEY CLUSTERED
, val varchar(30)
);
GO
INSERT INTO dbo.Table1 (val) VALUES ('test1');
GO
INSERT INTO dbo.Table1 (val) VALUES ('test2');
GO
INSERT INTO dbo.Table1 (val) VALUES ('test3');
GO
And the following ones on Database2:
USE Database2;
GO
CREATE TABLE dbo.Table2
(
id int IDENTITY(1, 1) NOT NULL PRIMARY KEY CLUSTERED
, val varchar(30)
);
GO
Now, suppose that you want to read from the first table the value with id = 2, and then apply your IF. Let's declare a variable and test it:
USE Database1;
GO
DECLARE @var varchar(30);
-- since you're on Database1, you don't need to specify full name
SELECT @var = val FROM dbo.Table1 WHERE id = 2;
IF @var = 'test2'
BEGIN
SELECT id, val FROM dbo.Table1;
END
ELSE
BEGIN
-- in this case the database name is needed
SELECT id, val FROM Database2.dbo.Table2;
END
GO
Does it help?

Related

Manually Checking of Value Changes in Tables for SQL

An example of the problem:
There are 3 columns in my SQL table:
+-------------+------------------+-------------------+
| id(integer) | age(varchar(20)) | name(varchar(20)) |
+-------------+------------------+-------------------+
There are 100 rows of different ids, ages and names. However, since many people update the database, age and name constantly change.
There are some boundaries on age and name:
Age has to be an integer and has to be greater than 0.
Name has to be alphabetic, not numeric.
The problem is to write a script that checks whether the changed values are within those boundaries. For example, age = -1 or name = 1 would be out of bounds.
Right now, there is a script that does insert * into newtable where age < 0 and isnumeric(age) = 0 or isnumeric(name) = 0;
The new table it builds contains the rows whose values are out of bounds.
I was wondering if there is a more efficient way to do such checking in SQL. Also, I'm using Microsoft SQL Server, so I was wondering if it would be more efficient to use another language such as C# or Python to solve this issue.
You can apply check constraints. Replace 'myTable' with your table name; 'AgeCheck' and 'NameCheck' are the names of the constraints, and AGE is the name of your age column.
ALTER TABLE myTable
ADD CONSTRAINT AgeCheck CHECK(AGE > 0 )
ALTER TABLE myTable
ADD CONSTRAINT NameCheck CHECK ([Name] NOT LIKE '%[^A-Z]%')
See more on Create Check Constraints
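Once the constraints are in place, an insert that breaks a rule is rejected with a CHECK constraint violation (error 547). For example, assuming myTable has the id, age and name columns described in the question:
INSERT INTO myTable (id, age, name) VALUES (101, -1, 'JOHN');
-- fails: conflicts with the CHECK constraint "AgeCheck"
INSERT INTO myTable (id, age, name) VALUES (102, 25, 'J0HN');
-- fails: conflicts with the CHECK constraint "NameCheck"
INSERT INTO myTable (id, age, name) VALUES (103, 25, 'JOHN');
-- succeeds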
If you want to automatically insert the invalid data into a new table, you can create an AFTER INSERT trigger. I have given a snippet for your reference; you can expand it with additional logic for the name check.
Generally, triggers are discouraged, as they make the transaction longer. If you want to avoid the trigger, you can have a SQL Agent job do the auditing on a regular basis.
CREATE TRIGGER AfterINSERTTrigger on [Employee]
FOR INSERT
AS
BEGIN
DECLARE @Age TINYINT, @Id INT, @Name VARCHAR(20);
SELECT @Id = ins.Id FROM INSERTED ins;
SELECT @Age = ins.Age FROM INSERTED ins;
SELECT @Name = ins.Name FROM INSERTED ins;
IF (@Age = 0)
BEGIN
INSERT INTO [EmployeeAudit](
[ID]
,[Name]
,[Age])
VALUES (@Id,
@Name,
@Age);
END
END
GO
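Note that the snippet above reads a single row from INSERTED, so it only really behaves correctly for single-row inserts. A set-based variant (a sketch against the same hypothetical Employee/EmployeeAudit tables, using TRY_CAST so it also covers the "age must be an integer" and "name must be alphabetic" rules; requires SQL Server 2012 or later) could look like this:
CREATE TRIGGER AfterINSERTTrigger_SetBased ON [Employee]
FOR INSERT
AS
BEGIN
    -- copy every inserted row that breaks either boundary rule, however many rows were inserted
    INSERT INTO [EmployeeAudit] ([ID], [Name], [Age])
    SELECT ins.Id, ins.Name, ins.Age
    FROM INSERTED ins
    WHERE TRY_CAST(ins.Age AS int) IS NULL      -- age is not an integer
       OR TRY_CAST(ins.Age AS int) <= 0         -- age is not greater than 0
       OR ins.Name LIKE '%[^A-Za-z]%';          -- name contains something other than letters
END
GO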

Reuse results of SELECT query inside a stored procedure

This is probably a very simple question, but my attempts to search for an answer are thwarted by Google finding answers showing how to reuse a query by making a stored procedure instead. I want to reuse the results of a query inside a stored procedure.
Here's a cut-down example where I've chopped out NOCOUNT, XACT_ABORT, TRANSACTION, TRY, and much of the logic.
CREATE PROCEDURE Do_Something
@userId UNIQUEIDENTIFIER
AS
BEGIN
DELETE FROM LikedItems
WHERE likedItemId IN
(
SELECT Items.id FROM Items
WHERE Items.userId = @userId
)
DELETE FROM FollowedItems
WHERE followedItemId IN
(
SELECT Items.id FROM Items
WHERE Items.userId = @userId
)
END
What is the syntax to reuse the results of the duplicated nested SELECT rather than doing it twice?
You can INSERT the result of the SELECT into a temporary table or a table variable, but that doesn't automatically mean the overall performance will be better. You need to measure it (a quick way to do that is sketched after the two variants below).
Temp Table
CREATE PROCEDURE Do_Something
@userId UNIQUEIDENTIFIER
AS
BEGIN
CREATE TABLE #Temp(id int);
INSERT INTO #Temp(id)
SELECT Items.id
FROM Items
WHERE Items.userId = @userId;
DELETE FROM LikedItems
WHERE likedItemId IN
(
SELECT id FROM #Temp
)
DELETE FROM FollowedItems
WHERE followedItemId IN
(
SELECT id FROM #Temp
)
DROP TABLE #Temp;
END
Table variable
CREATE PROCEDURE Do_Something
@userId UNIQUEIDENTIFIER
AS
BEGIN
DECLARE @Temp TABLE(id int);
INSERT INTO @Temp(id)
SELECT Items.id
FROM Items
WHERE Items.userId = @userId;
DELETE FROM LikedItems
WHERE likedItemId IN
(
SELECT id FROM @Temp
)
DELETE FROM FollowedItems
WHERE followedItemId IN
(
SELECT id FROM @Temp
)
END
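To compare the temp table and table variable variants above against the original procedure, one simple approach is to enable I/O and timing statistics around a test call; the GUID below is only a placeholder:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
EXEC Do_Something @userId = '00000000-0000-0000-0000-000000000000';
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;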
You can declare a table variable to store the results of the select and then simply query that.
CREATE PROCEDURE Do_Something
@userId UNIQUEIDENTIFIER
AS
BEGIN
DECLARE @TempItems TABLE (id int)
INSERT INTO @TempItems
SELECT Items.id FROM Items
WHERE Items.userId = @userId
DELETE FROM LikedItems
WHERE likedItemId IN
(
SELECT id FROM @TempItems
)
DELETE FROM FollowedItems
WHERE followedItemId IN
(
SELECT id FROM @TempItems
)
END
If the subquery is fast and simple, there is no need to change anything. The Items data is in the cache after the first query (if it was not already), and the locks have been obtained. If the subquery is slow and complicated, store its result in a table variable and reuse it with the same kind of subquery as listed in the question.
If your question is not about performance and you are simply wary of copy-paste: there is no real copy-paste here. It is the same logic, a similar structure and the same references, so yes, you will end up with almost the same query source code.
In general, the two are not the same. Rows could be deleted from or inserted into the Items table after the first query unless you are running under the SERIALIZABLE isolation level, and many different things could happen during the first delete and between the first and second delete statements. Each delete statement also requires its own execution plan, so all the information about the tables affected and the joins must be provided to the server anyway. You need to filter by the same source again, so you provide a subquery with the same source again; there is no "twice" or "reuse" of partial code. Data collected by a complicated query can, however, be reused (without re-running that complicated query, by simply querying the prepared source) via temp tables or table variables, as mentioned before.
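If you do need both deletes to see exactly the same set of Items rows, one option (a sketch, not necessarily required for your case) is to run them in a single transaction under a stricter isolation level:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
    DELETE FROM LikedItems
    WHERE likedItemId IN (SELECT Items.id FROM Items WHERE Items.userId = @userId);
    DELETE FROM FollowedItems
    WHERE followedItemId IN (SELECT Items.id FROM Items WHERE Items.userId = @userId);
COMMIT TRANSACTION;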

Get SCOPE_IDENTITY value when inserting bulk records for SQL TableType

I have the following table structure; for convenience I am only listing the relevant columns:
Table_A (Id, Name, Desc)
Table_1 (Id [identity column], Name, ...)
Table_2 (Id [identity column], Table_A_Id, Table_1_Id)
The relationship between Table_1 and Table_2 is 1..*.
Now I have created a table type for Table_A called TType_Table_A (which only contains Id as a column; from my C# app I send multiple records). I have achieved this bulk insert functionality as desired.
What I need is: when I insert records into Table_2 from TType_Table_A, say with the statements below, I would like to capture the Id of Table_2 for each record inserted.
declare @count int = (select count(*) from @TType_Table_A); --a variable declared for TType_Table_A
if(@count > 0)
begin
insert into Table_2(Table_A_Id,Table_1_Id)
SELECT @SomeValue, @SomeValueAsParameter FROM @TType_Table_A;
end;
Now say if 2 records are inserted, I would like to capture the Id for each of these 2 records.
Any input/help is appreciated
This is how I know it can be achieved, but I want to reduce DB calls from my app and avoid using a cursor in the stored procedure:
Insert a record into Table_1 and return the Id, then loop through the records, inserting each one into Table_2 and returning its Id
OR
Use a cursor in the stored procedure when inserting/selecting from the table type.
I assume this is SQL Server? Then you can make use of the OUTPUT clause, like so:
declare @NewId table (MyNewId INT)
insert into Table_2(Table_A_Id,Table_1_Id)
output INSERTED.Id INTO @NewId(MyNewId)
SELECT @SomeValue, @SomeValueAsParameter FROM @TType_Table_A;
SELECT * FROM @NewId
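One limitation to be aware of: with INSERT ... OUTPUT you can only return columns of the inserted row itself. If you also need to correlate each new Table_2.Id back to the source row from the table type, a common workaround (a sketch, reusing the hypothetical names from the question) is MERGE with an always-false match condition, because its OUTPUT clause can reference source columns:
DECLARE @NewIds TABLE (Table_2_Id INT, Source_Id INT);

MERGE INTO Table_2 AS t
USING (SELECT Id FROM @TType_Table_A) AS s
    ON 1 = 0                              -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (Table_A_Id, Table_1_Id) VALUES (s.Id, @SomeValueAsParameter)
OUTPUT inserted.Id, s.Id INTO @NewIds (Table_2_Id, Source_Id);

SELECT * FROM @NewIds;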

Conditional INSERT subquery of larger insert

I have a set of tables which track access logs. The logs contain data about the user's access including user agent strings. Since we know that user agent strings are, for all intents and purposes, practically unlimited, these would need to be stored as a text/blob type. Given the high degree of duplication, I'd like to store these in a separate reference table and have my main access log table have an id linking to it. Something like this:
accesslogs table:
username|accesstime|ipaddr|useragentid
useragents table:
id|crc32|md5|useragent
(the hashes are for indexing and quicker searching)
Here's the catch: I am working inside a framework that doesn't give me access to create fancy things like foreign keys. In addition, this has to be portable across multiple DBMSs. I have the join logic worked out for doing SELECTs, but I am having trouble figuring out how to insert properly. I want to do something like
INSERT INTO accesslogs (username, accesstime, ipaddr, useragentid)
VALUES
(
:username,
:accesstime,
:ipaddr,
(
CASE WHEN
(
SELECT id
FROM useragents
WHERE
useragents.crc32 = :useragentcrc32
AND
useragents.md5 = :useragentmd5
AND useragents.useragent LIKE :useragent
) IS NOT NULL
THEN
THAT_SAME_SELECT_FROM_ABOVE()
ELSE
GET_INSERT_ID_FROM(INSERT INTO useragents (crc32, md5, useragent) VALUES (:useragentcrc32, :useragentmd5, :useragent))
)
)
Is there any way to do this that doesn't use pseudofunctions whose names I just made up? The two parts I'm missing are how to repeat the select from above and how to get the new id from a subquery insert.
You will need to do separate inserts into each of the tables. You cannot insert into both at the same time.
If you use MS SQL Server, once you have inserted you can get the inserted id with SCOPE_IDENTITY() and then use it in the other table's insert.
I'm not sure there is a cross-platform way of doing this. You may have to have a lot of special cases for each supported back end. For example, for SQL Server you'd use the MERGE statement as the basis of the solution. Other DBMSs have different names for it, if they support it at all. Searching for "upsert" might help.
Edit: added the second query to be explicit, and added parameters.
-- SQL Server Example
--Schema Defs
Create Table Test (
id int not null identity primary key,
UserAgent nvarchar(50)
)
Create Table WebLog (
UserName nvarchar(50),
IPAddress nvarchar(50),
UserAgentID int
)
Create Unique Index UQ_UserAgent On Test(UserAgent)
-- Values parsed from log
Declare
@UserName nvarchar(50) = N'Loz',
@IPAddress nvarchar(50) = N'1.1.1.1',
@UserAgent nvarchar(50) = 'Test'
Declare @id int
-- Optionally Begin Transaction
-- Insert if necessary and get id
Merge
Into dbo.Test as t
Using
(Select @UserAgent as UserAgent) as s
On
t.[UserAgent] = s.[UserAgent]
When Matched Then
Update Set @id = t.id
When Not Matched Then
Insert (UserAgent) Values (s.UserAgent);
If @id Is Null Set @id = scope_identity()
Insert Into WebLog (UserName, IPAddress, UserAgentID) Values (@UserName, @IPAddress, @id)
-- Optionally Commit Transaction

Does anyone know a neat trick for reusing identity values?

Typically when you specify an identity column you get a convenient interface in SQL Server for asking for a particular row:
SELECT * FROM <table> WHERE $IDENTITY = @pID
You don't really need to concern yourself with the name of the identity column because there can only be one.
But what if I have a table which mostly consists of temporary data? Lots of inserts and lots of deletes. Is there a simple way for me to reuse the identity values?
Preferably I would want to be able to write a function that would return, say, NEXT_SMALLEST($IDENTITY) as the next identity value and do so in a fail-safe manner.
Basically, find the smallest value that's not in use. That's not entirely trivial to do; what I want is to be able to tell SQL Server that this is my function that will generate the identity values. But I know that no such function exists...
I want to implement global database IDs, so I need to provide a default value that I'm in control of.
My idea was that I should be able to have a table with all known IDs, and then every row ID from any other table that needed a global ID would reference that table. The default value would be provided by something like
INSERT INTO GlobalID
RETURN SCOPE_IDENTITY()
No; it's not unique if it can be reused.
Why do you want to re-use them? Why do you concern yourself with this field? If you want to be in control of it, don't make it an identity; create your own scheme and use that.
Don't reuse identities; you'll just shoot yourself in the foot. Use a large enough type that it never rolls over (a 64-bit bigint).
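For example (a sketch with made-up table and column names); a bigint identity gives you roughly 9.2 quintillion values:
CREATE TABLE dbo.TempData
(
    id bigint IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    payload varchar(100)
);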
To find missing gaps in a sequence of numbers join the table against itself with a +/- 1 difference:
SELECT a.id
FROM table AS a
LEFT OUTER JOIN table AS b ON a.id = b.id+1
WHERE b.id IS NULL;
This query will find the numbers in the id sequence for which id-1 is not in the table, i.e. the start numbers of contiguous runs. You can then use SET IDENTITY_INSERT <table> ON to insert a specific id and reuse a number. The cost of doing so is overwhelming (in both runtime and code complexity) compared with an ordinary identity-based insert.
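If what you actually want is the single smallest unused value above an existing id, a small variation on the same self-join works (YourTable is a placeholder; this assumes the lowest id is present):
SELECT MIN(a.id) + 1 AS smallest_unused_id
FROM YourTable AS a
LEFT OUTER JOIN YourTable AS b ON b.id = a.id + 1
WHERE b.id IS NULL;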
If you really want to reset the identity value to the lowest possible number, here is a trick you can use with DBCC CHECKIDENT.
Basically, the following SQL statements reseed the identity value so that it restarts from the lowest possible number.
create table TT (id int identity(1, 1))
GO
insert TT default values
GO 10
select * from TT
GO
delete TT where id between 5 and 10
GO
--; At this point, next ID will be 11, not 5
select * from TT
GO
insert TT default values
GO
--; as you can see here, next ID is indeed 11
select * from TT
GO
--; Now delete ID = 11
--; so that we can reseed next highest ID to 5
delete TT where id = 11
GO
--; Now, let's reseed identity value to the lowest possible identity number
declare @seedID int
select @seedID = max(id) from TT
print @seedID --; 4
--; We reseed identity column with "DBCC CheckIdent" and pass a new seed value
--; But we can't pass a variable as the seed argument, so let's use dynamic sql.
declare @sql nvarchar(200)
set @sql = 'dbcc checkident(TT, reseed, ' + cast(@seedID as varchar) + ')'
exec sp_sqlexec @sql
GO
--; Now the next
insert TT default values
GO
--; as you can see here, next ID is indeed 5
select * from TT
GO
I guess we would really need to know why you want to reuse your identity column. The only reason I can think of is that, because of the temporary nature of your data, you might exhaust the possible values for the identity. That is not really likely, but if that is your concern, you can use uniqueidentifiers (GUIDs) as the primary key in your table instead.
The function NEWID() will create a new GUID and can be used in insert statements (or other statements). Then when you delete a row, you don't have any "holes" in your key, because GUIDs are not created in order anyway.
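A minimal sketch of such a table (names made up); NEWSEQUENTIALID() is an alternative column default that reduces index fragmentation, at the cost of predictable values:
CREATE TABLE dbo.TempRows
(
    id uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
    payload varchar(100)
);

-- the default fills in the key; no identity bookkeeping to worry about
INSERT INTO dbo.TempRows (payload) VALUES ('example');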
[Syntax assumes SQL2008....]
Yes, it's possible. You need two management tables, and two triggers on each participating table.
First, the management tables:
-- this table should only ever have one row
CREATE TABLE NextId (Id INT)
INSERT NextId VALUES (1)
GO
CREATE TABLE RecoveredIds (Id INT NOT NULL PRIMARY KEY)
GO
Then, the triggers, two on each table:
CREATE TRIGGER tr_TableName_RecoverId ON TableName
FOR DELETE AS BEGIN
IF @@ROWCOUNT = 0 RETURN
INSERT RecoveredIds (Id) SELECT Id FROM deleted
END
GO
CREATE TRIGGER tr_TableName_AssignId ON TableName
INSTEAD OF INSERT AS BEGIN
DECLARE @rowcount INT = @@ROWCOUNT
IF @rowcount = 0 RETURN
DECLARE @required INT = @rowcount
DECLARE @new_ids TABLE (Id INT PRIMARY KEY)
DELETE TOP (@required) OUTPUT DELETED.Id INTO @new_ids (Id) FROM RecoveredIds
SET @rowcount = @@ROWCOUNT
IF @rowcount < @required BEGIN
DECLARE @output TABLE (Id INT)
UPDATE NextId SET Id = Id + (@required-@rowcount)
OUTPUT DELETED.Id INTO @output
-- this assumes you have a numbers table around somewhere
INSERT @new_ids (Id)
SELECT n.Number+o.Id-1 FROM Numbers n, @output o
WHERE n.Number BETWEEN 1 AND @required-@rowcount
END
SET IDENTITY_INSERT TableName ON
;WITH inserted_CTE AS (SELECT _no = ROW_NUMBER() OVER (ORDER BY Id), * FROM inserted)
, new_ids_CTE AS (SELECT _no = ROW_NUMBER() OVER (ORDER BY Id), * FROM @new_ids)
INSERT TableName (Id, Attr1, Attr2)
SELECT n.Id, i.Attr1, i.Attr2
FROM inserted_CTE i JOIN new_ids_CTE n ON i._no = n._no
SET IDENTITY_INSERT TableName OFF
END
You could script the triggers out easily enough from system tables.
You would want to test this for concurrency. It should work as is, syntax errors notwithstanding: The OUTPUT clause guarantees atomicity of id lookup->increment as one step, and the entire operation occurs within a transaction, thanks to the trigger.
TableName.Id is still an identity column. All the common idioms like $IDENTITY and SCOPE_IDENTITY() will still work.
There is no central table of ids by table, but you could create one easily enough.
I don't have any help for finding the values not in use, but if you really want to find them and set them yourself, you can use
SET IDENTITY_INSERT <table> ON
in your code to do so.
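For example, a sketch against the hypothetical TableName/Attr columns used in the trigger answer above:
SET IDENTITY_INSERT TableName ON;
INSERT INTO TableName (Id, Attr1, Attr2) VALUES (5, 'some', 'values');  -- reuse a specific Id
SET IDENTITY_INSERT TableName OFF;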
I'm with everyone else though. Why bother? Don't you have a business problem to solve?