Why is a UDF so much slower than a subquery?

I have a case where I need to translate (look up) several values from the same table. The first way I wrote it was with subqueries:
SELECT
(SELECT id FROM [user] WHERE user_pk = created_by) AS creator,
(SELECT id FROM [user] WHERE user_pk = updated_by) AS updater,
(SELECT id FROM [user] WHERE user_pk = owned_by) AS owner,
[name]
FROM asset
As I'm using this subquery a lot (I have about 50 tables with these fields), and I might need to add more conditions to it (for example, AND active = 1), I thought I'd put the lookup into a user-defined function (UDF) and use that. But the performance using the UDF was abysmal.
CREATE FUNCTION dbo.get_user ( @user_pk INT )
RETURNS INT
AS BEGIN
RETURN ( SELECT id
FROM ice.dbo.[user]
WHERE user_pk = @user_pk )
END
SELECT dbo.get_user(created_by) as creator, [name]
FROM asset
The performance of the subquery version is less than 1 second. The performance of the UDF version is about 30 seconds...
Why? And more importantly, is there any way I can write this in SQL Server 2008 so that I don't have to use so many subqueries?
Edit:
Just a little more explanation of when this is useful. This simple query (get the user id) gets a lot more complex when I want to show a text for a user, since I have to join with profile to get the language, with company to see if the language should be fetched from there instead, and with the translation table to get the translated text. And for most of these queries, performance is a secondary issue to readability and maintainability.

The UDF is a black box to the query optimiser so it's executed for every row.
You are doing a row-by-row cursor: for each row in asset, look up an id in another table. This happens when you use scalar or multi-statement UDFs (inline UDFs are simply macros that expand into the outer query).
One of many articles on the problem is "Scalar functions, inlining, and performance: An entertaining title for a boring post".
The sub-queries can be optimised to correlate and avoid the row-by-row operations.
What you really want is this:
SELECT
uc.id AS creator,
uu.id AS updater,
uo.id AS owner,
a.[name]
FROM
asset a
JOIN
[user] uc ON uc.user_pk = a.created_by
JOIN
[user] uu ON uu.user_pk = a.updated_by
JOIN
[user] uo ON uo.user_pk = a.owned_by
Update Feb 2019
SQL Server 2019 starts to fix this problem with scalar UDF inlining: eligible scalar UDFs are automatically rewritten as relational expressions and inlined into the calling query.
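As a sketch (assuming a database at compatibility level 150 on SQL Server 2019 or later, where inlining is on by default), the feature can be toggled per database or per function:
-- Enable or disable scalar UDF inlining database-wide (SQL Server 2019+)
ALTER DATABASE SCOPED CONFIGURATION SET TSQL_SCALAR_UDF_INLINING = ON;
-- Or opt a single function in or out of inlining
ALTER FUNCTION dbo.get_user ( @user_pk INT )
RETURNS INT
WITH INLINE = ON
AS BEGIN
RETURN ( SELECT id
FROM ice.dbo.[user]
WHERE user_pk = @user_pk )
END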

As other posters have suggested, using joins will definitely give you the best overall performance.
However, since you've stated that you don't want the headache of maintaining 50-ish similar joins or subqueries, try using an inline table-valued function as follows:
CREATE FUNCTION dbo.get_user_inline (@user_pk INT)
RETURNS TABLE AS
RETURN
(
SELECT TOP 1 id
FROM ice.dbo.[user]
WHERE user_pk = @user_pk
-- AND active = 1
)
Your original query would then become something like:
SELECT
(SELECT TOP 1 id FROM dbo.get_user_inline(created_by)) AS creator,
(SELECT TOP 1 id FROM dbo.get_user_inline(updated_by)) AS updater,
(SELECT TOP 1 id FROM dbo.get_user_inline(owned_by)) AS owner,
[name]
FROM asset
An inline table-valued function should have better performance than either a scalar function or a multistatement table-valued function.
The performance should be roughly equivalent to your original query, but any future changes can be made in the UDF, making it much more maintainable.
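On SQL Server 2005 and later you can also consume an inline TVF with OUTER APPLY, which keeps asset rows that have no matching user (the same NULL behaviour as the original subqueries); a sketch using the function above:
SELECT
uc.id AS creator,
uu.id AS updater,
uo.id AS owner,
a.[name]
FROM asset a
OUTER APPLY dbo.get_user_inline(a.created_by) uc
OUTER APPLY dbo.get_user_inline(a.updated_by) uu
OUTER APPLY dbo.get_user_inline(a.owned_by) uo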

To get the same result (NULL if the user is deleted or not active):
select
u1.id as creator,
u2.id as updater,
u3.id as owner,
a.[name]
FROM asset a
LEFT JOIN [user] u1 ON (u1.user_pk = a.created_by AND u1.active=1)
LEFT JOIN [user] u2 ON (u2.user_pk = a.updated_by AND u2.active=1)
LEFT JOIN [user] u3 ON (u3.user_pk = a.owned_by AND u3.active=1)

Am I missing something? Why can't this work? You are only selecting the id which you already have in the table:
select created_by as creator, updated_by as updater,
owned_by as owner, [name]
from asset
By the way, when designing tables you really should avoid reserved keywords, like name, as column names.


How to optimize SQL subqueries?

This is my scenario: I am trying to find records in a table by name, so I wrote a subquery with a case-insensitive pattern match in Postgres. The query works, but it takes a long time, and when I checked why, the culprit was the subquery: it hits every record in the contactcompany table.
SQL query:
SELECT
id, latitude,longitude,first_name,last_name,
contact_company_id,address,address2,city,state_id, zip,country_id,default_phone_id,last_contacted,image,contact_type_id
FROM contact
WHERE company_id = 001
AND contact_company_id IN (select id from contactcompany where lower( name ) ~*'jack')
Running this query takes about 2 seconds, and the time goes into the subquery hitting every record in the contactcompany table.
How can I optimize this subquery?
Try rewriting the subquery as an inner join with the main table; both queries return the same result.
Example:
SELECT contact.id,
contact.latitude,
contact.longitude,
contact.first_name,
contact.last_name,
contact.contact_company_id,
contact.address,
contact.address2,
contact.city,
contact.state_id,
contact.zip,
contact.country_id,
contact.default_phone_id,
contact.last_contacted,
contact.image,
contact.contact_type_id
FROM contact As contact
Inner Join contactcompany As contactcompany On contactcompany.id = contact.contact_company_id
WHERE contact.company_id = 001
AND lower( contactcompany.name ) ~* 'jack'
I would start by writing the query using EXISTS. Then, company_id is either a string or a number. Let me guess that it is a string, because the constant is written with leading zeros. If so, use single quotes:
SELECT c.*
FROM contact c
WHERE company_id = '001' AND
EXISTS (SELECT 1
FROM contactcompany cc
WHERE cc.name ~* 'jack' AND
cc.id = c.contact_company_id
);
Then an index on contact(company_id, contact_company_id) makes sense. And for the subquery, contactcompany(id, name).
There may be other alternatives for writing the query, but your question has not provided much information on table sizes, current performance, or the data types.
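A sketch of those indexes (the index names here are made up). Note that a plain btree index cannot serve the ~* 'jack' pattern itself; if that filter dominates, the pg_trgm extension provides a GIN index that can speed up case-insensitive regex matching:
-- Btree indexes for the outer filter and the correlation
CREATE INDEX idx_contact_company ON contact (company_id, contact_company_id);
CREATE INDEX idx_contactcompany_id_name ON contactcompany (id, name);
-- Optional: trigram index so the regex filter can use an index
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX idx_contactcompany_name_trgm ON contactcompany USING gin (name gin_trgm_ops);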

Bad performance when not selecting a specific column from a view

Using SQL Server 2016 SP1, I have a view Users that goes like this:
SELECT
ROW_NUMBER() OVER (ORDER BY ID) AS DataModelID, *
FROM
(Some query) AS tbl
I then select from it
SELECT
U1.ID UserId, U1.IdentityNumber IdentityNumber,
U1.ArabicFirstName, U1.ArabicSecondName
FROM
USERS U1
LEFT JOIN
USERS U2 ON U1.IdentityNumber = U2.IdentityNumber
AND U1.ID <> U2.ID
AND U1.RoleId = 2
WHERE
U2.ID IS NOT NULL
AND U1.IdentityNumber <> ''
AND PATINDEX('[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]', U1.IdentityNumber) = 1
The thing here is that the above query runs in 3 seconds when I select * or include the DataModelID column, but selecting any set of columns without that one takes more than 2 minutes.
Why is it running faster when that column is included?
I tried everything to clear the cache and ran it multiple times, with the same results.
Without seeing the actual execution plan there is no way to say for sure, but as @mvisser mentioned, the likely cause is that the optimizer chooses a better index when you do a SELECT * or include the DataModelID column than when you don't. There are a number of solutions here. One would be to look at the execution plan for the queries that run in 3 seconds, note which index is being used, and use an index hint to force the optimizer to use that index in the queries that don't reference those columns. I would not suggest this though - there are too many unanswered variables to consider it a viable option.
Here's what I recommend:
First, as @Lukasz Szozda mentioned, this is not SARGable:
AND PATINDEX( '[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]',U1.IdentityNumber) = 1
But this is:
U1.IdentityNumber LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
So I'd fix that first. Next, the fastest, most sure-fire way to resolve this is to simply include DataModelID in your queries even if you don't need it. You can either filter that column out at the application level, or create a stored proc that populates a temp table and then, for the final result set, retrieve your results from that temp table excluding DataModelID.
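A minimal sketch of that stored-proc approach (the procedure and temp table names are made up):
CREATE PROCEDURE dbo.GetDuplicateIdentityUsers
AS
BEGIN
-- Include DataModelID here so the faster plan is chosen
SELECT U1.ID AS UserId, U1.IdentityNumber, U1.ArabicFirstName, U1.ArabicSecondName, U1.DataModelID
INTO #tmpUsers
FROM USERS U1
LEFT JOIN USERS U2 ON U1.IdentityNumber = U2.IdentityNumber
AND U1.ID <> U2.ID
AND U1.RoleId = 2
WHERE U2.ID IS NOT NULL
AND U1.IdentityNumber <> ''
AND U1.IdentityNumber LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]';
-- Final result set excludes DataModelID
SELECT UserId, IdentityNumber, ArabicFirstName, ArabicSecondName
FROM #tmpUsers;
END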
OPTION #2
You can create an indexed view on your USERS table that looks something like this:
CREATE VIEW dbo.vwUSERS_clean
WITH SCHEMABINDING AS
SELECT ID, IdentityNumber, ArabicFirstName, ArabicSecondName, RoleId
FROM dbo.USERS
WHERE IdentityNumber <> ''
AND IdentityNumber LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]';
GO
Then create a unique, clustered index on it. Next you would change the query that you posted to reference your indexed view (e.g. change both references to USERS to dbo.vwUSERS_clean WITH (NOEXPAND)).
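A sketch of both steps (the index name is made up; the duplicate-check self-join stays in the outer query, against the indexed view):
CREATE UNIQUE CLUSTERED INDEX IX_vwUSERS_clean ON dbo.vwUSERS_clean (ID);
GO
SELECT U1.ID AS UserId, U1.IdentityNumber, U1.ArabicFirstName, U1.ArabicSecondName
FROM dbo.vwUSERS_clean U1 WITH (NOEXPAND)
JOIN dbo.vwUSERS_clean U2 WITH (NOEXPAND)
ON U1.IdentityNumber = U2.IdentityNumber
AND U1.ID <> U2.ID
WHERE U1.RoleId = 2;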
Note that ROW_NUMBER is not allowed in indexed views but, if you make ID your clustered index (or the first column in a composite clustered index), there will be no cost to adding ROW_NUMBER() OVER (ORDER BY ID) to queries that reference that indexed view.

Alternative for joining two tables multiple times

I have a situation where I have to join a table multiple times. Most of the joins need to be left joins, since some of the values are not available. How can I overcome the poor query performance caused by joining multiple times?
The Scenario
Tables
[Project]: ProjectId Guid, Name VARCHAR(MAX).
[UDF]: EntityId Guid, EntityType Char(1), UDFCode Guid, UDFName varchar(20)
[UDFDetail]: UDFCode Guid, Description VARCHAR(MAX)
Relationship:
[Project].ProjectId - [UDF].EntityId
[UDFDetail].UDFCode - [UDF].UDFCode
The UDF table holds custom fields for projects, based on the UDFName column. The value for these fields, however, is stored in UDFDetail, in the Description column.
I have lots of custom columns for Project, and they are stored in the UDF table.
So for example, to get two fields for the project I do the following select:
SELECT
p.Name ProjectName,
ud1.Description Field1,
ud1.UDFCode Field1Id,
ud2.Description Field2,
ud2.UDFCode Field2Id
FROM
Project p
LEFT JOIN UDF u1 ON
u1.EntityId = p.ProjectId AND u1.UDFName='Field1'
LEFT JOIN UDFDetail ud1 ON
ud1.UDFCode = u1.UDFCode
LEFT JOIN UDF u2 ON
u2.EntityId = p.ProjectId AND u2.UDFName='Field2'
LEFT JOIN UDFDetail ud2 ON
ud2.UDFCode = u2.UDFCode
The Problem
Imagine the above select but joining with some 15 fields. In my query I have around 10 fields already and the performance is not very good; it takes about 20 seconds to run. I have good indexes on these tables, and looking at the execution plan it is doing only index seeks without any lookups. Regarding the joins, they need to be left joins, because Field1 might not exist for that specific project.
The Question
Is there a more performant way to retrieve the data?
How would you do the query to retrieve 10 different fields for one project in a schema like this?
Your choices are pivot, explicit aggregation (with conditional functions), or the joins. If you have the appropriate indexes set up, the joins may be the fastest method.
The correct index would be UDF(EntityId, UDFName, UDFCode).
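A sketch of that index (the name is made up); with UDFCode in the index it covers the join columns, so the seeks need no lookups:
CREATE INDEX IX_UDF_Entity_Name ON UDF (EntityId, UDFName, UDFCode);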
You can test if the group by is faster by running a query such as:
SELECT count(*)
FROM Project p LEFT JOIN
UDF u1
ON u1.EntityId = p.ProjectId LEFT JOIN
UDFDetail ud1
ON ud1.UDFCode = u1.UDFCode;
If this runs fast enough, then you can consider the group by approach.
You can try this very weird contraption (it does not look pretty, but it does a single set of outer joins). The intermediate result is a very "wide" and "long" dataset, which we then "compact" with aggregation: for each ProjectName, each Field1 column will have N results, N-1 NULLs and 1 non-null value, which is then selected with a simple MAX aggregation (N is the number of fields).
select ProjectName, max(Field1) as Field1, max(Field1Id) as Field1Id, max(Field2) as Field2, max(Field2Id) as Field2Id
from (
select
p.Name as ProjectName,
case when u.UDFName='Field1' then ud.Description else NULL end as Field1,
case when u.UDFName='Field1' then ud.UDFCode else NULL end as Field1Id,
case when u.UDFName='Field2' then ud.Description else NULL end as Field2,
case when u.UDFName='Field2' then ud.UDFCode else NULL end as Field2Id
from Project p
left join UDF u on p.ProjectId=u.EntityId
left join UDFDetail ud on u.UDFCode=ud.UDFCode
) tmp
group by ProjectName
The query can actually be rewritten without the inner query, but that should not make a big difference :), and looking at Gordon Linoff's suggestion and your answer, it might take just about 20 seconds as well, but it is still worth a try.

In an EXISTS can my JOIN ON use a value from the original select

I have an order system. Users can be attached to different orders as different types of user, and they can download documents associated with an order. Documents are only available to certain types of users on the order. I'm having trouble writing the query that checks a user's permission to view a document and selects the info about that document.
I have the following tables and (applicable) fields:
Docs: DocNo, FileNo
DocAccess: DocNo, UserTypeWithAccess
FileUsers: FileNo, UserType, UserNo
I have the following query:
SELECT Docs.*
FROM Docs
WHERE DocNo = 1000
AND EXISTS (
SELECT * FROM DocAccess
LEFT JOIN FileUsers
ON FileUsers.UserType = DocAccess.UserTypeWithAccess
AND FileUsers.FileNo = Docs.FileNo /* Errors here */
WHERE FileUsers.UserNo = 2000 )
The trouble is that in the EXISTS select, it does not recognize Docs (at Docs.FileNo) as a valid table. If I move the second ON condition to the WHERE clause it works, but I would rather limit the initial join than filter rows out after the fact.
I can get around this a couple of ways, but this seems like it would be best. Is there anything I'm missing here? Or is it simply not allowed?
I think this is a limitation of your database engine. In most databases, Docs would be in scope for the entire subquery, including both the WHERE and ON clauses.
However, you do not need to worry about where you put the particular condition. SQL is a descriptive language, not a procedural language: its purpose is to describe the output, and the SQL engine, parser, and compiler should choose the most optimal execution path. That is not always true in practice, but here you can just move the condition to the WHERE clause and not worry about it.
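That is, the variant the question already found to work, with the correlation moved to the WHERE clause:
SELECT Docs.*
FROM Docs
WHERE DocNo = 1000
AND EXISTS (
SELECT * FROM DocAccess
LEFT JOIN FileUsers
ON FileUsers.UserType = DocAccess.UserTypeWithAccess
WHERE FileUsers.FileNo = Docs.FileNo
AND FileUsers.UserNo = 2000 )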
I am not clear on why you need to join with FileUsers at all in your subquery.
What is the purpose and idea of the query (in plain English)?
In any case, if you do need to join with FileUsers, then I suggest using an inner join and moving the second filter to the WHERE clause. I don't think you can use it in a JOIN condition in a subquery - at least I've never seen it used that way before. I believe you can only correlate through the WHERE clause.
You have to use aliases to get this working:
SELECT
doc.*
FROM
Docs doc
WHERE
doc.DocNo = 1000
AND EXISTS (
SELECT
*
FROM
DocAccess acc
LEFT OUTER JOIN
FileUsers usr
ON
usr.UserType = acc.UserTypeWithAccess
AND usr.FileNo = doc.FileNo
WHERE
usr.UserNo = 2000
)
This also makes it more clear which table each field belongs to (think about using the same table twice or more in the same query with different aliases).
If you only want to limit the output to one row, you can use TOP 1:
SELECT TOP 1
doc.*
FROM
Docs doc
INNER JOIN
FileUsers usr
ON
usr.FileNo = doc.FileNo
INNER JOIN
DocAccess acc
ON
acc.UserTypeWithAccess = usr.UserType
WHERE
doc.DocNo = 1000
AND usr.UserNo = 2000
Of course the second query works a bit differently than the first one (both JOINs are INNER). Depending on your data model you might even leave the TOP 1 out of that query.

OR query performance and strategies with Postgresql

In my application I have a table of application events that are used to generate a user-specific feed of application events. Because it is generated using an OR query, I'm concerned about performance of this heavily used query and am wondering if I'm approaching this wrong.
In the application, users can follow both other users and groups. When an action is performed (e.g., a new post is created), a feed_item record is created with the actor_id set to the user's id and the subject_id set to the group id in which the action was performed; actor_type and subject_type are set to the class names of the models. Since users can follow both groups and users, I need to generate a query that checks both the actor_id and subject_id, and it needs to select distinct records to avoid duplicates. Since it's an OR query, I can't use a normal index. And since a record is created every time an action is performed, I expect this table to accumulate a lot of records rather quickly.
Here's the current query (the followings table joins users to feeders, i.e., to users and groups):
SELECT DISTINCT feed_items.* FROM "feed_items"
INNER JOIN "followings"
ON (
(followings.feeder_id = feed_items.subject_id
AND followings.feeder_type = feed_items.subject_type)
OR
(followings.feeder_id = feed_items.actor_id
AND followings.feeder_type = feed_items.actor_type)
)
WHERE (followings.follower_id = 42) ORDER BY feed_items.created_at DESC LIMIT 30 OFFSET 0
So my questions:
Since this is a heavily used query, is there a performance problem here?
Is there any obvious way to simplify or optimize this that I'm missing?
What you have is called an exclusive arc and you're seeing exactly why it's a bad idea. The best approach for this kind of problem is to make the feed item type dynamic:
Feed Items: id, type (A or S for Actor or Subject), subtype (replaces actor_type and subject_type)
and then your query becomes
SELECT DISTINCT fi.*
FROM feed_items fi
JOIN followings f ON f.feeder_id = fi.id AND f.feeder_type = fi.type AND f.feeder_subtype = fi.subtype
or similar.
This may not completely or exactly represent what you need to do, but the principle is sound: you need to eliminate the reason for the OR condition by changing your data model in a way that lends itself to performant queries.
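A sketch of that remodel as Postgres DDL (the column and index names here are made up; the point is that a single (id, type, subtype) triple replaces the actor/subject column pairs, so one join condition and one index suffice):
CREATE TABLE feed_items (
id bigint NOT NULL,
type char(1) NOT NULL, -- 'A' for Actor, 'S' for Subject
subtype varchar NOT NULL, -- replaces actor_type and subject_type
created_at timestamptz NOT NULL DEFAULT now()
);
-- one composite index now covers the single join condition
CREATE INDEX idx_feed_items_lookup ON feed_items (id, type, subtype);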
Explain-analyze and time the query to see if there is a problem.
Also, you could try expressing the query as a union:
SELECT x.* FROM
(
SELECT feed_items.* FROM feed_items
INNER JOIN followings
ON followings.feeder_id = feed_items.subject_id
AND followings.feeder_type = feed_items.subject_type
WHERE (followings.follower_id = 42)
UNION
SELECT feed_items.* FROM feed_items
INNER JOIN followings
ON followings.feeder_id = feed_items.actor_id
AND followings.feeder_type = feed_items.actor_type
WHERE (followings.follower_id = 42)
) AS x
ORDER BY x.created_at DESC
LIMIT 30
But again, explain-analyze and benchmark.
To find out if there is a performance problem, measure it; PostgreSQL can explain it for you.
I don't think the query needs simplifying; if you identify a performance problem then you may need to revise your indexes.
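A sketch of that measurement on the query from the question (actual timings, row counts, and buffer usage will come from your data):
EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT feed_items.* FROM feed_items
INNER JOIN followings
ON (followings.feeder_id = feed_items.subject_id
AND followings.feeder_type = feed_items.subject_type)
OR (followings.feeder_id = feed_items.actor_id
AND followings.feeder_type = feed_items.actor_type)
WHERE followings.follower_id = 42
ORDER BY feed_items.created_at DESC LIMIT 30;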