Equivalent of Oracle's ROWID in SQL Server

What's the equivalent of Oracle's RowID in SQL Server?

From the Oracle docs:
ROWID Pseudocolumn
For each row in the database, the ROWID pseudocolumn returns the address of the row. Oracle Database rowid values contain information necessary to locate a row:
The data object number of the object
The data block in the datafile in which the row resides
The position of the row in the data block (first row is 0)
The datafile in which the row resides (first file is 1). The file number is relative to the tablespace.
The closest equivalent to this in SQL Server is the RID, which has three components: File:Page:Slot.
In SQL Server 2008 it is possible to use the undocumented and unsupported %%physloc%% virtual column to see this. This returns a binary(8) value with the Page ID in the first four bytes, then 2 bytes for File ID, followed by 2 bytes for the slot location on the page.
The scalar function sys.fn_PhysLocFormatter or the sys.fn_PhysLocCracker TVF can be used to convert this into a more readable form
CREATE TABLE T(X INT);
INSERT INTO T VALUES(1),(2)
SELECT %%physloc%% AS [%%physloc%%],
sys.fn_PhysLocFormatter(%%physloc%%) AS [File:Page:Slot]
FROM T
Example Output
+--------------------+----------------+
| %%physloc%% | File:Page:Slot |
+--------------------+----------------+
| 0x2926020001000000 | (1:140841:0) |
| 0x2926020001000100 | (1:140841:1) |
+--------------------+----------------+
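The sys.fn_PhysLocCracker TVF mentioned above returns the three components as separate columns instead of a formatted string; a minimal sketch against the same table T:
SELECT T.X,
       plc.file_id,
       plc.page_id,
       plc.slot_id
FROM T
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS plc;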
Note that this is not leveraged by the query processor. Whilst it is possible to use this in a WHERE clause
SELECT *
FROM T
WHERE %%physloc%% = 0x2926020001000100
SQL Server will not directly seek to the specified row. Instead it will do a full table scan, evaluate %%physloc%% for each row and return the one that matches (if any do).
To reverse the process carried out by the two functions above and get the binary(8) value corresponding to known File, Page and Slot values, the following can be used.
DECLARE @FileId int = 1,
        @PageId int = 338,
        @Slot int = 3

SELECT CAST(REVERSE(CAST(@PageId AS BINARY(4))) AS BINARY(4)) +
       CAST(REVERSE(CAST(@FileId AS BINARY(2))) AS BINARY(2)) +
       CAST(REVERSE(CAST(@Slot AS BINARY(2))) AS BINARY(2))
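As a quick sanity check (a sketch reusing the variables above), the constructed value can be round-tripped through sys.fn_PhysLocFormatter, which should print (1:338:3):
DECLARE @FileId int = 1,
        @PageId int = 338,
        @Slot int = 3;

DECLARE @loc binary(8) =
    CAST(REVERSE(CAST(@PageId AS BINARY(4))) AS BINARY(4)) +
    CAST(REVERSE(CAST(@FileId AS BINARY(2))) AS BINARY(2)) +
    CAST(REVERSE(CAST(@Slot AS BINARY(2))) AS BINARY(2));

SELECT sys.fn_PhysLocFormatter(@loc); -- (1:338:3)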

I have to dedupe a very big table with many columns and speed is important. Thus I use this method which works for any table:
DELETE T FROM
    (SELECT ROW_NUMBER() OVER (PARTITION BY BINARY_CHECKSUM(*) ORDER BY %%physloc%%) AS RowNumber, *
     FROM MyTable) T
WHERE T.RowNumber > 1
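The same idea can be written as a CTE, which some find easier to read; this is just a sketch, and note that BINARY_CHECKSUM(*) can collide, so partitioning by the actual duplicate-defining columns is safer when you know them:
WITH Dups AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY BINARY_CHECKSUM(*) ORDER BY %%physloc%%) AS RowNumber
    FROM MyTable
)
DELETE FROM Dups
WHERE RowNumber > 1;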

If you want to uniquely identify a row within the table rather than your result set, then you need to look at using something like an IDENTITY column. See "IDENTITY property" in the SQL Server help. SQL Server does not auto-generate an ID for each row in the table as Oracle does, so you have to go to the trouble of creating your own ID column and explicitly fetch it in your query.
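A minimal sketch of the IDENTITY approach (table and column names are placeholders):
CREATE TABLE dbo.Orders
(
    RowID int IDENTITY(1,1) NOT NULL PRIMARY KEY,  -- generated automatically on insert
    OrderRef varchar(50) NOT NULL
);

INSERT INTO dbo.Orders (OrderRef) VALUES ('A-1'), ('A-2');

SELECT RowID, OrderRef FROM dbo.Orders;  -- RowID is 1, 2, ...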
EDIT: for dynamic numbering of result set rows see below, but that would probably be an equivalent of Oracle's ROWNUM, and I assume from all the comments on the page that you want the stuff above.
For SQL Server 2005 and later you can use the ranking functions (such as ROW_NUMBER) to achieve dynamic numbering of rows.
For example I do this on a query of mine:
select row_number() over (order by rn_execution_date asc) as 'Row Number', rn_execution_date as 'Execution Date', count(*) as 'Count'
from td.run
where rn_execution_date >= '2009-05-19'
group by rn_execution_date
order by rn_execution_date asc
Will give you:
Row Number  Execution Date           Count
----------  -----------------------  -----
1           2009-05-19 00:00:00.000  280
2           2009-05-20 00:00:00.000  269
3           2009-05-21 00:00:00.000  279
There's also an article on support.microsoft.com on dynamically numbering rows.

Check out the new ROW_NUMBER function. It works like this:
SELECT ROW_NUMBER() OVER (ORDER BY EMPID ASC) AS ROWID, * FROM EMPLOYEE

Several of the answers above will work around the lack of a direct reference to a specific row, but will not work if changes occur to the other rows in a table. That is my criterion for which answers fall technically short.
A common use of Oracle's ROWID is to provide a (somewhat) stable method of selecting rows and later returning to the row to process it (e.g., to UPDATE it). The method of finding a row (complex joins, full-text searching, or browsing row-by-row and applying procedural tests against the data) may not be easily or safely re-used to qualify the UPDATE statement.
The SQL Server RID seems to provide the same functionality, but does not provide the same performance. That is the only issue I see, and unfortunately the purpose of retaining a ROWID is to avoid repeating an expensive operation to find the row in, say, a very large table. Nonetheless, performance for many cases is acceptable. If Microsoft adjusts the optimizer in a future release, the performance issue could be addressed.
It is also possible to simply use FOR UPDATE and keep the CURSOR open in a procedural program. However, this could prove expensive in large or complex batch processing.
Caveat: Even Oracle's ROWID would not be stable if the DBA, between the SELECT and the UPDATE, for example, were to rebuild the database, because it is the physical row identifier. So the ROWID device should only be used within a well-scoped task.

If you want to permanently number the rows in the table, Please don't use the RID solution for SQL Server. It will perform worse than Access on an old 386. For SQL Server simply create an IDENTITY column, and use that column as a clustered primary key. This will place a permanent, fast Integer B-Tree on the table, and more importantly every non-clustered index will use it to locate rows. If you try to develop in SQL Server as if it's Oracle you'll create a poorly performing database. You need to optimize for the engine, not pretend it's a different engine.
Also, please don't use NEWID() to populate the primary key with GUIDs; you'll kill insert performance. If you must use GUIDs, use NEWSEQUENTIALID() as the column default (sketched below). But INT will still be faster.
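A minimal example of that NEWSEQUENTIALID() default (names are placeholders); note that NEWSEQUENTIALID() can only be used in a DEFAULT constraint, not called directly in a SELECT:
CREATE TABLE dbo.GuidKeyed
(
    RowGuid uniqueidentifier NOT NULL
        CONSTRAINT DF_GuidKeyed_RowGuid DEFAULT NEWSEQUENTIALID(),
    SomeData varchar(50) NULL,
    CONSTRAINT PK_GuidKeyed PRIMARY KEY CLUSTERED (RowGuid)
);

INSERT INTO dbo.GuidKeyed (SomeData) VALUES ('a');  -- RowGuid is filled in by the default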
If, on the other hand, you simply want to number the rows that result from a query, use the ROW_NUMBER() OVER () function as one of the query columns.

If you just want basic row numbering for a small dataset, how about something like this?
SELECT row_number() OVER (order by getdate()) as ROWID, * FROM Employees

From http://vyaskn.tripod.com/programming_faq.htm#q17:
Oracle has a rownum to access rows of a table using row number or row id. Is there any equivalent for that in SQL Server? Or how to generate output with row number in SQL Server?
There is no direct equivalent to Oracle's rownum or row id in SQL Server. Strictly speaking, in a relational database, rows within a table are not ordered and a row id won't really make sense. But if you need that functionality, consider the following three alternatives:
Add an IDENTITY column to your table.
Use the following query to generate a row number for each row. The query below generates a row number for each row in the authors table of the pubs database. For this query to work, the table must have a unique key.
SELECT (SELECT COUNT(i.au_id)
        FROM pubs..authors i
        WHERE i.au_id >= o.au_id) AS RowID,
       au_fname + ' ' + au_lname AS 'Author name'
FROM pubs..authors o
ORDER BY RowID
Use a temporary table approach to store the entire resultset into a temporary table, along with a row id generated by the IDENTITY() function. Creating a temporary table will be costly, especially when you are working with large tables. Go for this approach if you don't have a unique key in your table.
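A sketch of that temporary-table approach against the same pubs..authors table (the IDENTITY() function is only allowed in SELECT ... INTO):
SELECT IDENTITY(int, 1, 1) AS RowID,
       au_fname + ' ' + au_lname AS [Author name]
INTO #NumberedAuthors
FROM pubs..authors;

SELECT RowID, [Author name]
FROM #NumberedAuthors
ORDER BY RowID;

DROP TABLE #NumberedAuthors;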

ROWID is a pseudocolumn on Oracle tables, so for SQL Server, build your own: add a column called ROWID with a default value of NEWID().
How to do that: Add column, with default value, to existing table in SQL Server
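A sketch of what that looks like (the table name is a placeholder); keep in mind that NEWID() values are random, so they identify rows but give no useful ordering:
ALTER TABLE dbo.MyTable
ADD ROWID uniqueidentifier NOT NULL
    CONSTRAINT DF_MyTable_ROWID DEFAULT NEWID() WITH VALUES;  -- existing rows each get their own GUID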

Please see http://msdn.microsoft.com/en-us/library/aa260631(v=SQL.80).aspx
In SQL Server a timestamp (now called rowversion) column is not the same as a DateTime column. It is used to uniquely identify a row, not just within a table but across the entire database.
This can be used for optimistic concurrency. for example
UPDATE [Job] SET [Name]=@Name, [XCustomData]=@XCustomData WHERE ([ModifiedTimeStamp]=@Original_ModifiedTimeStamp AND [GUID]=@Original_GUID)
The ModifiedTimeStamp condition ensures that you are updating the original data; the UPDATE will affect zero rows if another update has occurred on the row in the meantime.
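A small self-contained sketch of that pattern (column types here are guesses; the real Job table isn't shown in the answer):
CREATE TABLE dbo.Job
(
    [GUID] uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
    [Name] varchar(100) NULL,
    XCustomData varchar(max) NULL,
    ModifiedTimeStamp rowversion       -- bumped automatically on every INSERT/UPDATE
);

INSERT INTO dbo.Job ([Name]) VALUES ('original');

DECLARE @Original_GUID uniqueidentifier, @Original_ModifiedTimeStamp binary(8);
SELECT @Original_GUID = [GUID], @Original_ModifiedTimeStamp = ModifiedTimeStamp FROM dbo.Job;

-- succeeds only if nobody else updated the row since it was read
UPDATE dbo.Job
SET [Name] = 'changed'
WHERE [GUID] = @Original_GUID
  AND ModifiedTimeStamp = @Original_ModifiedTimeStamp;

SELECT @@ROWCOUNT AS RowsAffected;  -- 0 means another update got there first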

I took this example from an MS SQL sample; you can see the @id variable can be interchanged with an integer or varchar or whatever. This was the same solution I was looking for, so I am sharing it. Enjoy!
-- UPDATE statement with CTE references that are correctly matched.
DECLARE @x TABLE (ID int, Stad int, Value int, ison bit);
INSERT @x VALUES (1, 0, 10, 0), (2, 1, 20, 0), (6, 0, 40, 0), (4, 1, 50, 0), (5, 3, 60, 0), (9, 6, 20, 0), (7, 5, 10, 0), (8, 8, 220, 0);
DECLARE @Error int;
DECLARE @id int;
WITH cte AS (SELECT TOP 1 * FROM @x WHERE Stad=6)
UPDATE x -- cte is referenced by the alias.
SET ison=1, @id=x.ID
FROM cte AS x
SELECT *, @id AS 'random' FROM @x
GO

You can get the ROWID by using the methods given below:
1. Create a new table with an auto-increment (IDENTITY) field in it.
2. Use the ROW_NUMBER analytic function to get the sequence based on your requirement. I prefer this because it helps in situations where you want the row id in ascending or descending order of a specific field or combination of fields.
Sample: ROW_NUMBER() OVER (PARTITION BY Deptno ORDER BY Sal DESC)
The sample above will give you the sequence number based on the highest salary of each department. PARTITION BY is optional and you can remove it according to your requirements.
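A fuller sketch of option 2, assuming a hypothetical Emp table with Empno, Deptno and Sal columns:
SELECT Empno,
       Deptno,
       Sal,
       ROW_NUMBER() OVER (PARTITION BY Deptno ORDER BY Sal DESC) AS RowID
FROM Emp;
-- RowID restarts at 1 within each Deptno, highest salary first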

Please try
select NEWID()
Source: https://learn.microsoft.com/en-us/sql/t-sql/data-types/uniqueidentifier-transact-sql

Related

Does Oracle allow an SQL INSERT INTO using a SELECT statement for VALUES if the destination table has a GENERATED ALWAYS AS IDENTITY column

I am trying to insert rows into an Oracle 19c table to which we recently added a GENERATED ALWAYS AS IDENTITY column (column name is "ID"). The column should auto-increment and not need to be specified explicitly in an INSERT statement. Typical INSERT statements work - i.e. INSERT INTO table_name (field1,field2) VALUES ('f1', 'f2') (merely an example). The ID field increments when a typical INSERT is executed. But the query below, which was working before the addition of the IDENTITY column, now fails with the error: ORA-00947: not enough values.
The field counts are identical with the exception of not including the new ID IDENTITY field, which I am expecting to auto-increment. Is this statement not allowed with an IDENTITY column?
Is the INSERT INTO statement, using a SELECT from another table, not allowing this and producing the error?
INSERT INTO T.AUDIT
(SELECT r.IDENTIFIER, r.SERIAL, r.NODE, r.NODEALIAS, r.MANAGER, r.AGENT, r.ALERTGROUP,
r.ALERTKEY, r.SEVERITY, r.SUMMARY, r.LASTMODIFIED, r.FIRSTOCCURRENCE, r.LASTOCCURRENCE,
r.POLL, r.TYPE, r.TALLY, r.CLASS, r.LOCATION, r.OWNERUID, r.OWNERGID, r.ACKNOWLEDGED,
r.EVENTID, r.DELETEDAT, r.ORIGINALSEVERITY, r.CATEGORY, r.SITEID, r.SITENAME, r.DURATION,
r.ACTIVECLEARCHANGE, r.NETWORK, r.EXTENDEDATTR, r.SERVERNAME, r.SERVERSERIAL, r.PROBESUBSECONDID
FROM R.STATUS r
JOIN
(SELECT SERVERSERIAL, MAX(LASTOCCURRENCE) as maxlast
FROM T.AUDIT
GROUP BY SERVERSERIAL) gla
ON r.SERVERSERIAL = gla.SERVERSERIAL
WHERE (r.LASTOCCURRENCE > SYSDATE - (1/1440)*5 AND gla.maxlast < r.LASTOCCURRENCE)
)
Thanks for any help.
Yes, it does; your example insert
INSERT INTO table_name (field1,field2) VALUES ('f1', 'f2')
would also work as
INSERT INTO table_name (field1,field2) SELECT 'f1', 'f2' FROM DUAL
db<>fiddle demo
Your problematic real insert statement is not specifying the target column list, so when it used to work it was relying on the columns in the table (and their data types) matching the results of the query. (This is similar to relying on select *, and potentially problematic for some of the same reasons.)
Your query selects 34 values, so your table had 34 columns. You have now added a 35th column to the table, your new ID column. You know that you don't want to insert directly into that column, but Oracle doesn't know that, at least not at the point where it's comparing the query with the table columns. The table has 35 columns, so as you haven't said otherwise as part of the statement, it is expecting 35 values in the select list.
There's no way for Oracle to know which of the 35 columns you're skipping. Arguably it could guess based on the identity column, but that would be more work and inconsistent, and it's not unreasonable for it to insist you do the work to make sure it's right. It's expecting 35 values, it sees 34, so it throws an error saying there are not enough values - which is true.
Your question sort of implies you think Oracle might be doing something special to prevent the insert ... select ... syntax if there is an identity column, but in fact it's the opposite - it isn't doing anything special, and it's reporting the column/value count mismatch as it usually would.
So, you have to list the columns you are populating - you can't automatically skip one. So your statement needs to be:
INSERT INTO T.AUDIT (IDENTIFIER, SERIAL, NODE, ..., PROBESUBSECONDID)
SELECT r.IDENTIFIER, r.SERIAL, r.NODE, ..., r.PROBESUBSECONDID
FROM ...
using the actual column names of course if they differ from the query column names.
If you can't change that insert statement then you could make the ID column invisible; but then you would have to specify it explicitly in queries, as select * won't see it - but then you shouldn't rely on * anyway.
db<>fiddle
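For completeness, a sketch of that invisible-column alternative (Oracle 12c and later):
ALTER TABLE T.AUDIT MODIFY (ID INVISIBLE);

-- the unqualified INSERT ... SELECT with 34 values works again,
-- but SELECT * no longer returns ID; it has to be listed explicitly:
SELECT ID, IDENTIFIER, SERIAL FROM T.AUDIT;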

Get identity of row inserted in Snowflake Datawarehouse

If I have a table with an auto-incrementing ID column, I'd like to be able to insert a row into that table, and get the ID of the row I just created. I know that generally, StackOverflow questions need some sort of code that was attempted or research effort, but I'm not sure where to begin with Snowflake. I've dug through their documentation and I've found nothing for this.
The best I could do so far is try result_scan() and last_query_id(), but these don't give me any relevant information about the row that was inserted, just confirmation that a row was inserted.
I believe what I'm asking for is along the lines of MS SQL Server's SCOPE_IDENTITY() function.
Is there a Snowflake equivalent function for MS SQL Server's SCOPE_IDENTITY()?
EDIT: for the sake of having code in here:
CREATE TABLE my_db..my_table
(
ROWID INT IDENTITY(1,1),
some_number INT,
a_time TIMESTAMP_LTZ(9),
b_time TIMESTAMP_LTZ(9),
more_data VARCHAR(10)
);
INSERT INTO my_db..my_table
(
some_number,
a_time,
more_data
)
VALUES
(1, my_time_value, some_data);
I want to get to that auto-increment ROWID for this row I just inserted.
NOTE: The answer below may not be 100% correct in some very rare cases; see the UPDATE section below.
Original answer
Snowflake does not provide the equivalent of SCOPE_IDENTITY today.
However, you can exploit Snowflake's time travel to retrieve the maximum value of a column right after a given statement is executed.
Here's an example:
create or replace table x(rid int identity, num int);
insert into x(num) values(7);
insert into x(num) values(9);
-- you can insert rows in a separate transaction now to test it
select max(rid) from x AT(statement=>last_query_id());
----------+
MAX(RID) |
----------+
2 |
----------+
You can also save the last_query_id() into a variable if you want to access it later, e.g.
insert into x(num) values(5);
set qid = last_query_id();
...
select max(rid) from x AT(statement=>$qid);
Note: it will usually be correct, but if the user inserts a large value into rid manually, for example, it might influence the result of this query.
UPDATE
Note: I realized the code above might, in rare cases, generate an incorrect answer.
Since the execution order of various phases of a query in a distributed system like Snowflake can be non-deterministic, and Snowflake allows concurrent INSERT statements, the following might happen
Two queries, Q1 and Q2, do a simple single row INSERT, start at roughly the same time
Q1 starts, is a bit ahead
Q2 starts
Q1 creates a row with value 1 from the IDENTITY column
Q2 creates a row with value 2 from the IDENTITY column
Q2 gets ahead of Q1 - this is the key part
Q2 commits, is marked as finished at time T2
Q1 commits, is marked as finished at time T1
Note that T1 is later than T2. Now, when we try to do SELECT ... AT(statement=>Q1), we will see the state as-of T1, including all changes from statements before, hence including the value 2 from Q2. Which is not what we want.
The way around it could be to add a unique identifier to each INSERT (e.g. from a separate SEQUENCE object), and then use a MAX.
Sorry. Distributed transactions are hard :)
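A sketch of that sequence-based workaround, assuming you can add an extra tag column to the table and a dedicated sequence (all names here are made up):
CREATE OR REPLACE SEQUENCE insert_tag_seq;
CREATE OR REPLACE TABLE x (rid INT IDENTITY, num INT, insert_tag INT);

-- reserve a tag value for this particular insert and remember it
SET my_tag = (SELECT insert_tag_seq.NEXTVAL);

INSERT INTO x (num, insert_tag) VALUES (7, $my_tag);

-- returns exactly the row(s) created by the tagged insert,
-- regardless of what concurrent inserts did
SELECT rid FROM x WHERE insert_tag = $my_tag;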
If I have a table with an auto-incrementing ID column, I'd like to be
able to insert a row into that table, and get the ID of the row I just
created.
FWIW, here's a slight variation of the current accepted answer (using Snowflake's 'Time Travel' feature) that gives any column values "of the row I just created." It applies to auto-incrementing sequences and more generally to any column configured with a default (e.g. CURRENT_TIMESTAMP() or UUID_STRING()). Further, I believe it avoids any inconsistencies associated with a second query utilizing MAX().
Assuming this table setup:
CREATE TABLE my_db.my_table
(
ROWID INT IDENTITY(1,1),
some_number INT,
a_time TIMESTAMP_LTZ(9),
b_time TIMESTAMP_LTZ(9),
more_data VARCHAR(10)
);
Make sure change tracking is enabled for this table (the CHANGES clause used below relies on the change_tracking parameter):
ALTER TABLE my_db.my_table SET change_tracking = true;
Perform the INSERT per usual:
INSERT INTO my_db.my_table
(
some_number,
a_time,
more_data
)
VALUES
(1, my_time_value, some_data);
Use the CHANGES clause, with BEFORE(statement => ...) and END(statement => ...) both set to LAST_QUERY_ID(), to SELECT the row(s) added to my_table as the precise result of the previous INSERT statement, with the column values that existed at the moment the row(s) were added, including any defaults:
SET insertQueryId=LAST_QUERY_ID();
SELECT
ROWID,
some_number,
a_time,
b_time,
more_data
FROM my_db.my_table
CHANGES(information => default)
BEFORE(statement => $insertQueryId)
END(statement => $insertQueryId);
For more information on the CHANGES, BEFORE, END clauses see the Snowflake documentation here.

Add timestamp to existing table

I have a SQL Server table with just 3 columns, one of which is of type varbinary. The data in this column is actually a Json document which among other properties contains information about when the data was last modified. Unfortunately the SQL table itself does not contain information about when its rows were modified.
Now, when doing sorting and filtering of the data, I of course don't want to fetch all rows in order to find e.g. the latest 100 entries.
So my question is: does SQL Server somehow remember when a row was added/modified? I have tried adding a timestamp and this is applied to all existing rows but this is applied randomly I think, because the sorting doesn't work. I don't need a datetime or anything, I just want to be able sort the records based on when they were last modified.
Thanks
For those looking to add a timestamp column of type DateTime into an existing DB table, you can do this like so:
ALTER TABLE TestTable
ADD DateInserted DATETIME NOT NULL DEFAULT (GETDATE());
The existing records will automatically get the value equal to the date/time of the moment when column is added.
New records will get up-to-date value upon insertion.
SQL Server will not track historically when a row was inserted or modified so you need to rely on the JSON data to figure that out yourself. You are going to need a new column to make this efficient to query. Once you have your new column you have some options:
Loop through all your records populating the new column with the relevant value from the JSON data.
If your version of SQL Server is recent enough, you can query the JSON data directly. Populate this column using a query like this:
UPDATE MyTable
SET MyNewColumn = JSON_VALUE(JsonDataColumn, '$.Customer.DateCreated')
The downside of this method is that you need to maintain this column yourself whenever the JSON data changes.
Make SQL Server compute the value from the JSON automatically, for example:
ALTER TABLE MyTable
ADD MyNewColumn AS JSON_VALUE(JsonDataColumn, '$.Customer.DateCreated')
And, create an index to make it efficient:
CREATE INDEX IX_MyTable_MyNewColumn
ON MyTable(MyNewColumn)
Use a new CreatedDate column and store the datetime every time you make an insert; you could use GETDATE() as the value (or as the column default).
An UpdatedDate column can be used for updates, as sketched below.
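A sketch of that suggestion (names are placeholders, and it assumes the table has a primary key column called Id); CreatedDate fills itself in via a default, while UpdatedDate needs either application code or a trigger, because SQL Server will not maintain it for you:
ALTER TABLE MyTable
ADD CreatedDate datetime2 NOT NULL
        CONSTRAINT DF_MyTable_CreatedDate DEFAULT (SYSDATETIME()),
    UpdatedDate datetime2 NULL;
GO

CREATE TRIGGER trg_MyTable_UpdatedDate ON MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET UpdatedDate = SYSDATETIME()
    FROM MyTable AS t
    JOIN inserted AS i ON i.Id = t.Id;   -- adjust to the table's real key
END;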
in order to find e.g. the latest 100 entries.
Timestamp (rowversion) is indeed what you need.
It's an ever-increasing value that is updated automatically, so you are always able to find the last modified/inserted rows.
Here is an example:
create table dbo.test1 (id int);
insert into dbo.test1 values(1), (2), (3);
alter table dbo.test1 add ts timestamp;
update dbo.test1
set id = 10
where id = 2
select top 1 *
from dbo.test1
order by ts desc;
--id ts
--10 0x000000001FCFABD2
insert into dbo.test1 (id)
values (100);
select top 1 *
from dbo.test1
order by ts desc;
--id ts
--100 0x000000001FCFABD3
As you see, you always get the last modified/inserted row.
For your purpose just use
select top 100 *
...
order by ts desc;
Thanks. Apparently I didn't look hard enough before I posted this question. The question has been asked a couple of times before and the answer is: Nope! There is no easy solution to this.
SQL Server does not keep track of when a record was created or modified, which is what I was hoping for. So I will go for the next best solution, which is probably to create a datetime column, retrieve the modified date from the Json document and then update the record. Or rather, the 1.4 million records :-(

C# SqlParameter - provide SQL (Microsoft SQL)

I am currently tasked with a project on a database whose schema cannot be changed. I need to insert a new row into a table that requires an ID to be unique, but the original creators of the structure did not set this value to autoincrement. To go around this, I have been using code akin to:
(SELECT TOP 1 [ID] from [Table] ORDER BY [ID] DESC) + 1
when giving the value of the ID field, basically having an inner query of sorts. The problem is that a few lines down, I need that ID I just inserted. If I could set a SqlParameter to output for this column, I could get the value it was set to; the problem is I'm using SQL, not a hard value like I do with other SqlParameters. Can't I use SQL in place of just a value?
This is a potential high volume exchange, so I'd rather not do 2 different queries (one to get id, then one to insert).
You say you cannot change the schema, but can you add an additional table to the project that has an autoincrement column? Then you could use that table to (safely) create your new IDs and return them to your code.
This is similar to how Oracle does IDs, and sometimes vendor applications for SQL Server that also run on Oracle will use that approach just to help minimize the differences between the two databases.
Update:
Ah, I just spotted your comment to the other answer here. In that case, the only other thing I can think that might work is to put your two statements (insert a new ID, and then read back the new ID) inside a transaction with the SERIALIZABLE isolation level. And that just kinda sucks, because it leaves you open to performance and locking gotchas.
Is it possible for you to create a stored procedure in the database to do this and the return value of the stored procedure will then return the ID that you need?
I'm a bit confused about where you need to use this ID. If it's inside of the same stored proc, just use this method:
DECLARE @NewId int
SELECT TOP 1 @NewId = [ID] + 1 FROM [Table] ORDER BY [ID] DESC
SELECT @NewId
You can put more than one SQL statement in a single SqlCommand. So you could easily do something along the lines of what Abe suggested:
DECLARE @NewId int
SELECT TOP 1 @NewId = [ID] + 1 FROM [Table] ORDER BY [ID] DESC
INSERT INTO [Table] (ID, ...) VALUES (@NewId, ...)
SELECT @NewId
Then you just call ExecuteScalar on your SqlCommand, and it will do the INSERT and then return the ID it used.
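Since reading MAX(ID) and inserting are two separate steps, two concurrent callers could otherwise grab the same ID; a sketch combining the batch above with the SERIALIZABLE transaction suggested earlier (table and column names are placeholders):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

DECLARE @NewId int;
SELECT @NewId = ISNULL(MAX([ID]), 0) + 1
FROM [Table] WITH (UPDLOCK, HOLDLOCK);   -- blocks competing writers until we commit

INSERT INTO [Table] (ID /*, other columns */)
VALUES (@NewId /*, other values */);

COMMIT TRANSACTION;

SELECT @NewId;   -- picked up by ExecuteScalar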

T-SQL: Use t-sql while routine return value in SELECT

I have a T-SQL routine that copies user information from one table, 'Radius', to another, 'Tags'. However, as the rows are transferred, I would also like to include a unique randomly generated code (3 chars long) in the INSERT. The code is generated by the WHILE loop below. Any way to do this?
INSERT Tags (UserID, JobID, Code)
SELECT UserID, @JobID, ?????
FROM Radius
Unique random code generator:
WHILE EXISTS (SELECT * FROM Tags WHERE Code = @code)
BEGIN
    SET @code = ''   -- start a fresh 3-character candidate each time round
    SELECT @code = @code + CHAR(n) FROM
    (
        SELECT TOP 3 number AS n FROM master..spt_values
        WHERE type = 'p' AND (number BETWEEN 48 AND 57 OR number BETWEEN 65 AND 90)
        ORDER BY NEWID()
    ) AS t   -- derived table needs an alias
END
CLARIFICATION: The reason for doing this is that I want to keep the random code generation logic at the level of the SQL stack. Implementing this in the app code would require me to check the db every time a potential random code is generated to see if it is unique. As the number of code records increases, so will the number of calls to the db, as the probability increases that there will be more duplicate codes generated before a unique one is generated.
Part One: generate a table with all possible values
DECLARE @i int
CREATE TABLE #AllChars(value CHAR(1))
SET @i=48
WHILE @i<=57
BEGIN
    INSERT INTO #AllChars(value) VALUES(CHAR(@i))
    SET @i=@i+1
END
SET @i=65
WHILE @i<=90
BEGIN
    INSERT INTO #AllChars(value) VALUES(CHAR(@i))
    SET @i=@i+1
END
CREATE TABLE AllCodes(value CHAR(3),
    CONSTRAINT PK_AllChars PRIMARY KEY CLUSTERED(value))
INSERT INTO AllCodes(value)
SELECT AllChars1.Value+AllChars2.Value+AllChars3.Value
FROM #AllChars AS AllChars1, #AllChars AS AllChars2, #AllChars AS AllChars3
This is a one-off operation and takes around 1 second to run on SQL Azure. Now that you have all possible values in a table, any future inserts become something along the lines of:
SELECT
RadiusTable.UserID,
RadiusTable.JobID,
IDTable.Value
FROM
(
SELECT ROW_NUMBER() OVER (ORDER BY UserID,JobID) As RadiusRow,
UserID,JobID
FROM Radius
) AS RadiusTable INNER JOIN
(
SELECT ROW_NUMBER() OVER (ORDER BY newID()) As IDRow,
Value
FROM AllCodes
) AS IDTable ON RadiusTable.RadiusRow = IDTable.IDRow
Before going with any of these schemes you had better be certain that you are not going to have more than 46,656 (36^3) rows in your table, otherwise you will run out of unique code values.
I do not know if this is possible and suitable for your situation, but to me it seems that a scalar-valued function would be a solution.
Well, let me start over then.
This seems kind of ugly but it might work: newid() inside sql server function
The accepted answer that is.
Ah, been there, done that too. The problem with this is that I am using T-SQL stored procedures that are called by ASP.NET. Where would I put the CREATE VIEW statement? I can't add it to the function file.