I am doing a bulk delete on a set of IDs sent to a stored procedure as a comma-separated string. I have a function that splits these into a table so I can compare against them. I sometimes get a deadlock on this SP even though I have SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;. Is there a better way to do a bulk delete than this with an SP, for performance and no deadlocks?
DELETE FROM Game
WHERE Id IN (
SELECT g.Id
FROM Game g
INNER JOIN [EventGame] eg ON g.Id = eg.Id
INNER JOIN MemberEvent me ON me.EventId = eg.EventId
WHERE
eg.EventId = @EventId AND
g.Id IN (SELECT * FROM dbo.Split(@DeletedGameIds, ',')) AND
(g.[Type] = 1 OR g.[Type] IS NULL) AND
me.MemberId = @MemberId
)
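(Note: if you are on SQL Server 2016 or later, the custom dbo.Split can usually be replaced by the built-in STRING_SPLIT, which a later query in this thread already uses. A minimal, self-contained sketch with a hypothetical sample value:)
-- Assumes SQL Server 2016+; STRING_SPLIT returns a single column named 'value'.
DECLARE @DeletedGameIds varchar(max) = '1,2,3';
SELECT CAST(value AS int) AS Id
FROM STRING_SPLIT(@DeletedGameIds, ',');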
A better way to delete is to first identify the list of IDs to be deleted, and then perform a clustered-index delete using only those IDs.
CREATE Table #temp (Id Int)
Insert into #temp (Id)
SELECT Id FROM Game
WHERE Id IN (
SELECT g.Id
FROM Game g
INNER JOIN [EventGame] eg ON g.Id = eg.Id
INNER JOIN MemberEvent me ON me.EventId = eg.EventId
WHERE
eg.EventId = @EventId AND
g.Id IN (SELECT * FROM dbo.Split(@DeletedGameIds, ',')) AND
(g.[Type] = 1 OR g.[Type] IS NULL) AND
me.MemberId = @MemberId
)
--you can run this delete in a loop with a batch size of 50,000 records, issuing a CHECKPOINT between batches (a sketch follows below)
DELETE G
FROM Game G
Inner Join #temp t
ON G.ID = t.Id
--checkpoint
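A minimal sketch of that batched loop, assuming the #temp table above; the 50,000 batch size and the CHECKPOINT (useful under SIMPLE recovery) are suggestions, not requirements:
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    -- Delete one batch via the clustered index on Id.
    DELETE TOP (50000) G
    FROM Game G
    INNER JOIN #temp t ON G.Id = t.Id;

    SET @rows = @@ROWCOUNT; -- read immediately after the DELETE

    CHECKPOINT; -- lets the log truncate between batches under SIMPLE recovery
END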
The following query groups Snippets by ChannelId and returns an UnreadSnippetCount.
To determine the UnreadSnippetCount, Channels is joined onto ChannelUsers to fetch the date the user last read the channel, and this LastReadDate limits the count to rows where the snippet was created after that point.
SELECT c.Id, COUNT(s.Id) as [UnreadSnippetCount]
FROM Channels c
INNER JOIN ChannelUsers cu
ON cu.ChannelId = c.Id
LEFT JOIN Snippets s
ON cu.ChannelId = s.ChannelId
AND s.CreatedByUserId <> @UserId
WHERE cu.UserId = @UserId
AND (cu.LastReadDate IS NULL OR s.CreatedDate > cu.LastReadDate)
AND c.Id IN (select value from STRING_SPLIT(@ChannelIds, ','))
GROUP BY c.Id
The query works well logically but for Channels that have a large number of Snippets (97691), the query can take 10 minutes or more to return.
The following index is created:
CREATE NONCLUSTERED INDEX [IX_Snippets_CreatedDate] ON [dbo].[Snippets]
(
[CreatedDate] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, DROP_EXISTING = OFF, ONLINE = OFF, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
GO
Update:
Query execution plan (original query):
https://www.brentozar.com/pastetheplan/?id=B19sI105F
Update 2
Moving the where clause into the join as suggested:
SELECT c.Id, COUNT(s.Id) as [UnreadSnippetCount]
FROM Channels c
INNER JOIN ChannelUsers cu
ON cu.ChannelId = c.Id
LEFT JOIN Snippets s
ON cu.ChannelId = s.ChannelId
AND s.CreatedByUserId <> @UserId
AND s.CreatedDate > cu.LastReadDate
WHERE cu.UserId = @UserId
AND c.Id IN (select value from STRING_SPLIT(@ChannelIds, ','))
Produces this execution plan:
https://www.brentozar.com/pastetheplan/?id=HkqwFk0ct
Is there a better date comparison method I can use?
Update 3 - Solution
Index
CREATE NONCLUSTERED INDEX [IX_Snippet_Created] ON [dbo].[Snippets]
(ChannelId ASC, CreatedDate ASC) INCLUDE (CreatedByUserId);
Stored Proc
ALTER PROCEDURE [dbo].[GetUnreadSnippetCounts2]
(
@ChannelIds ChannelIdsType READONLY,
@UserId nvarchar(36)
)
AS
SET NOCOUNT ON
SELECT
c.Id,
COUNT(s.Id) as [UnreadSnippetCount]
FROM Channels c
JOIN @ChannelIds cid
ON cid.Id = c.Id
INNER JOIN ChannelUsers cu
ON cu.ChannelId = c.Id
AND cu.UserId = @UserId
JOIN Snippets s
ON cu.ChannelId = s.ChannelId
AND s.CreatedByUserId <> @UserId
AND (cu.LastReadDate IS NULL OR s.CreatedDate > cu.LastReadDate)
GROUP BY c.Id;
This gives the correct results logically and returns quickly.
Resulting execution plan:
https://www.brentozar.com/pastetheplan/?id=S1GwRCCcK
There are a number of inefficiencies I can see in the query plan.
Using STRING_SPLIT means the compiler does not know how many values will be returned, nor that they are unique, and the data type is mismatched. Ideally you would pass in a table-valued parameter; if you cannot, another solution is to dump the values into a table variable:
DECLARE @tmp TABLE (Id int PRIMARY KEY);
INSERT @tmp (Id)
select value
from STRING_SPLIT(@ChannelIds, ',')
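For the table-valued-parameter route, a minimal sketch of what the setup and call might look like; the type name ChannelIdsType and the Id column come from the solution above, while the sample values are hypothetical:
-- One-time setup: a user-defined table type matching the proc's parameter.
CREATE TYPE ChannelIdsType AS TABLE (Id int PRIMARY KEY);
GO
-- Caller side: fill the TVP and pass it to the procedure.
DECLARE @ids ChannelIdsType;
INSERT @ids (Id) VALUES (1), (2), (3);
EXEC dbo.GetUnreadSnippetCounts2 @ChannelIds = @ids, @UserId = N'user-id-here';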
You need better indexing on Snippets. I would suggest the following
CREATE NONCLUSTERED INDEX [IX_Snippet_Created] ON [dbo].[Snippets]
(ChannelId ASC, CreatedDate ASC) INCLUDE (CreatedByUserId);
It doesn't make sense to place CreatedByUserId in the key, because it's compared with an inequality; keep it in the INCLUDE.
As you have already been told, it's better to move the conditions (for left-joined tables) into the ON clause. I don't know whether you then still need the cu.LastReadDate IS NULL check; I've left it in.
I must say I'm unclear on your schema, but INNER JOIN ChannelUsers cu feels wrong here; perhaps it should be a LEFT JOIN? I cannot say more without seeing your full setup and required output.
SELECT
c.Id,
COUNT(s.Id) as [UnreadSnippetCount]
FROM Channels c
JOIN @tmp t
ON t.Id = c.Id
INNER JOIN ChannelUsers cu
ON cu.ChannelId = c.Id
AND cu.UserId = @UserId
LEFT JOIN Snippets s
ON cu.ChannelId = s.ChannelId
AND s.CreatedByUserId <> @UserId
AND (cu.LastReadDate IS NULL OR s.CreatedDate > cu.LastReadDate)
GROUP BY c.Id;
A select query with joins returns results in less than 1 second (1,000 rows in 600 ms), but inserting into a temp table or a physical table takes 15-16 seconds.
I tested I/O performance: a select or insert without any joins takes sub-second to write 1,000 rows.
I tried trace flag 1118.
I tried adding a clustered index on the temp table and doing the insert with TABLOCK and a MAXDOP hint.
None of these improved performance.
Thanks for all your comments. There are 6,000 to 20,000 rows that need to be inserted every 5 seconds from Kafka...
1. I get the data from Kafka into SQL Server using a table type variable.
2. Pass it as a parameter to a stored procedure.
3. Load this data, joining with other tables, into a temporary table #table.
4. Use #table to merge the data into the application table.
I found a workaround that achieves the target turnaround time, though I don't exactly know the reason for the behaviour. As I mentioned in the problem statement, the bottleneck was writing the result set of the select statement (which joins the table variable with various other tables) to the temp table.
I put that select into a stored proc and inserted the result of executing the proc into a temp table; now the insert takes less than 1 second. The select statement in question (a sketch of the INSERT ... EXEC pattern follows it):
SELECT
i.Id AS IId,
df.Id AS dfid,
MAX(CASE
WHEN lp.Value IS NULL and f.pnp = 1 THEN 0
WHEN lp.Value = 0 and f.tzan = 1 and f.pnp = 0 THEN NULL
ELSE lp.Value
END) 'FV',
MAX(lp.TS),
MAX(lp.Eid),
MAX(0+lp.IsDelayedStream)
FROM
f1 f WITH (NOLOCK)
INNER JOIN ft1 ft WITH (NOLOCK) ON f.FeedTypeId = ft.Id
INNER JOIN FeedDataField fdf WITH (NOLOCK)
ON fdf.FeedId = f.Id
INNER JOIN df1 df WITH (NOLOCK)
ON fdf.dfId = df.Id
INNER JOIN ds1 ds WITH (NOLOCK)
ON df.dsid = ds.Id
INNER JOIN dp1 dp WITH (NOLOCK)
ON ds.dpId = dp.Id
INNER JOIN dc1 dc WITH (NOLOCK)
ON dc.dcId = ds.dcId
INNER JOIN i1 i WITH (NOLOCK)
ON f.iId = I.Id
INNER JOIN id1 id WITH (NOLOCK)
ON id.iId = i.Id
INNER JOIN IdentifierType it WITH (NOLOCK)
ON id.ItId = it.Id
INNER JOIN ivw_Tdf tdf WITH(NOEXPAND)
ON tdf.iId = i.Id
INNER JOIN z.dbo.[tlp] lp
ON lp.Ticker = id.Name AND lp.Field = df.SourceName AND
lp.Contributor = dc.Name AND lp.YellowKey = tdf.TextValue
WHERE
ft.Name in ('X', 'Y') AND f.SA = 1
AND dp.Name = 'B' AND (i.Inactive IS NULL OR i.Inactive = 0)
AND it.Name = 'T' AND id.ValidTo = @InfinityDate
AND tdf.SourceName = 'MSD'
AND tdf.ValidTo = @Infinity
GROUP BY i.Id, df.Id
OPTION(MAXDOP 4, OPTIMIZE FOR (@Infinity = '9999-12-31 23:59:59',
@InfinityDate = '9999-12-31'))
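A minimal sketch of that INSERT ... EXEC pattern; the proc name dbo.LoadFeedValues and the staging column types are hypothetical stand-ins, and the column list must match the proc's result set exactly:
-- Hypothetical sketch: capture a stored proc's result set in a temp table.
CREATE TABLE #staging
(
    IId int,
    dfid int,
    FV decimal(18, 6),
    TS datetime2,
    Eid int,
    IsDelayedStream int
);
INSERT INTO #staging (IId, dfid, FV, TS, Eid, IsDelayedStream)
EXEC dbo.LoadFeedValues; -- hypothetical proc wrapping the SELECT above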
I've tried the following SQL. It was slow when I declared a @NewCityJudge table variable and joined to it, but fast when I converted the table into a literal value and joined on that.
-- input id into @NewCityJudge, only one record
declare @NewCityJudge table(CountryId int)
insert into @NewCityJudge
select CountryId from ....
SELECT TOP (300) *
FROM MyTable as b
join ComponentLanguageIndex as c on c.id = b.[key]
join ComponentCountryTags e on c.ComponentId = e.ComponentId
join @NewCityJudge as d on d.CountryId = e.CountryId -- join @NewCityJudge here
But it was faster when using:
SELECT TOP (300) *
FROM MyTable as b
join ComponentLanguageIndex as c on c.id = b.[key]
join ComponentCountryTags e on c.ComponentId = e.ComponentId
where CountryId in (39)
@NewCityJudge always holds fewer than 5 records.
The first way takes 5 seconds; the second way takes 500 ms.
Thanks
PS: It was fast when using a #NewCityJudge temp table, but I'm afraid that might cause some transaction issues.
As opposed to a join, you could use the following:
SELECT TOP (300) *
FROM MyTable as b
join ComponentLanguageIndex as c on c.id = b.[key]
join ComponentCountryTags e on c.ComponentId = e.ComponentId
where e.CountryId IN (Select CountryId from @NewCityJudge)
Any time you use TOP (n) on a table with some joins, it is less than ideal. I would also like to understand: why TOP (300)? Is this for testing purposes? How many records are in MyTable? If you remove TOP (300), you may find your query-time issue resolved.
I solved this problem by using a temp table instead of the parameter.
Create table #NewCityJudge(CountryId int)
insert into #NewCityJudge select CountryId from ....
SELECT TOP (300) *
FROM MyTable as b
join ComponentLanguageIndex as c on c.id = b.[key]
join ComponentCountryTags e on c.ComponentId = e.ComponentId
join #NewCityJudge as d on d.CountryId = e.CountryId
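A hedged aside, not from this thread: the usual explanation for this gap is that table variables get a fixed low row estimate at compile time, and a commonly used mitigation is OPTION (RECOMPILE), which lets the optimizer see the actual row count. A sketch under that assumption:
declare @NewCityJudge table(CountryId int)
insert into @NewCityJudge select 39 -- hypothetical single value

SELECT TOP (300) *
FROM MyTable as b
join ComponentLanguageIndex as c on c.id = b.[key]
join ComponentCountryTags e on c.ComponentId = e.ComponentId
join @NewCityJudge as d on d.CountryId = e.CountryId
OPTION (RECOMPILE) -- recompile with the table variable's true cardinality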
I have a hard time with query optimization; currently I'm very close to the point of a database redesign, and Stack Overflow is my last hope. I don't think just showing you the query is enough, so I've linked not only the database script but also attached a database backup, in case you don't want to generate the data by hand.
Here you can find both the script and the backup
The problems start when you try to do the following...
exec LockBranches @count=64, @lockedBy='034C0396-5C34-4DDA-8AD5-7E43B373AE5A', @lockedOn='2011-07-01 01:29:43.863', @unlockOn='2011-07-01 01:32:43.863'
The main problems occur in this part:
UPDATE B
SET B.LockedBy = @lockedBy,
B.LockedOn = @lockedOn,
B.UnlockOn = @unlockOn,
B.Complete = 1
FROM
(
SELECT TOP (@count) B.LockedBy, B.LockedOn, B.UnlockOn, B.Complete
FROM Objectives AS O
INNER JOIN Generations AS G ON G.ObjectiveID = O.ID
INNER JOIN Branches AS B ON B.GenerationID = G.ID
INNER JOIN
(
SELECT SB.BranchID AS BranchID, SUM(X.SuitableProbes) AS SuitableProbes
FROM SpicieBranches AS SB
INNER JOIN Probes AS P ON P.SpicieID = SB.SpicieID
INNER JOIN
(
SELECT P.ID, 1 AS SuitableProbes
FROM Probes AS P
/* ----> */ INNER JOIN Results AS R ON P.ID = R.ProbeID /* SSMS Estimated execution plan says this operation is the roughest */
GROUP BY P.ID
HAVING COUNT(R.ID) > 0
) AS X ON P.ID = X.ID
GROUP BY SB.BranchID
) AS X ON X.BranchID = B.ID
WHERE
(O.Active = 1)
AND (B.Sealed = 0)
AND (B.GenerationNo < O.BranchGenerations)
AND (B.LockedBy IS NULL OR DATEDIFF(SECOND, B.UnlockOn, GETDATE()) > 0)
AND (B.Complete = 1 OR X.SuitableProbes = O.BranchSize * O.EstimateCount * O.ProbeCount)
) AS B
EDIT: Here are the row counts for each table:
Spicies 71536
Results 10240
Probes 10240
SpicieBranches 4096
Branches 256
Estimates 5
Generations 1
Versions 1
Objectives 1
Somebody else might be able to explain better than I can why this is much quicker. Experience tells me that when you have a bunch of queries that collectively run slowly but should be quick in their individual parts, it's worth trying a temporary table.
This is much quicker
ALTER PROCEDURE LockBranches
-- Add the parameters for the stored procedure here
@count INT,
@lockedOn DATETIME,
@unlockOn DATETIME,
@lockedBy UNIQUEIDENTIFIER
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON
--Create Temp Table
SELECT SpicieBranches.BranchID AS BranchID, SUM(X.SuitableProbes) AS SuitableProbes
INTO #BranchSuitableProbeCount
FROM SpicieBranches
INNER JOIN Probes AS P ON P.SpicieID = SpicieBranches.SpicieID
INNER JOIN
(
SELECT P.ID, 1 AS SuitableProbes
FROM Probes AS P
INNER JOIN Results AS R ON P.ID = R.ProbeID
GROUP BY P.ID
HAVING COUNT(R.ID) > 0
) AS X ON P.ID = X.ID
GROUP BY SpicieBranches.BranchID
UPDATE B SET
B.LockedBy = @lockedBy,
B.LockedOn = @lockedOn,
B.UnlockOn = @unlockOn,
B.Complete = 1
FROM
(
SELECT TOP (@count) Branches.LockedBy, Branches.LockedOn, Branches.UnlockOn, Branches.Complete
FROM Objectives
INNER JOIN Generations ON Generations.ObjectiveID = Objectives.ID
INNER JOIN Branches ON Branches.GenerationID = Generations.ID
INNER JOIN #BranchSuitableProbeCount ON Branches.ID = #BranchSuitableProbeCount.BranchID
WHERE
(Objectives.Active = 1)
AND (Branches.Sealed = 0)
AND (Branches.GenerationNo < Objectives.BranchGenerations)
AND (Branches.LockedBy IS NULL OR DATEDIFF(SECOND, Branches.UnlockOn, GETDATE()) > 0)
AND (Branches.Complete = 1 OR #BranchSuitableProbeCount.SuitableProbes = Objectives.BranchSize * Objectives.EstimateCount * Objectives.ProbeCount)
) AS B
END
This is much quicker with an average execution time of 54ms compared to 6 seconds with the original one.
EDIT
Had a look and combined my ideas with those from RBarryYoung's solution. If you use the following to create the temporary table
SELECT SB.BranchID AS BranchID, COUNT(*) AS SuitableProbes
INTO #BranchSuitableProbeCount
FROM SpicieBranches AS SB
INNER JOIN Probes AS P ON P.SpicieID = SB.SpicieID
WHERE EXISTS(SELECT * FROM Results AS R WHERE R.ProbeID = P.ID)
GROUP BY SB.BranchID
then you can get this down to 15ms which is 400x better than we started with. Looking at the execution plan shows that there is a table scan happening on the temp table. Normally you avoid table scans as best you can but for 128 rows (in this case) it is quicker than whatever it was doing before.
This is basically a complete guess here, but in times past I've found that joining onto the results of a sub-query can be horrifically slow. That is, the subquery was being evaluated way too many times when it really didn't need to.
The way around this was to move the subqueries into CTEs and to join onto those instead. Good luck!
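As a hedged sketch of that idea applied to the counting subquery from this question (same logic restructured into CTEs; note SQL Server inlines CTEs rather than materializing them, so treat this as a structural rewrite, not a guaranteed win):
-- Probes that have at least one result, then per-branch counts.
WITH ProbesWithResults AS
(
    SELECT P.ID, P.SpicieID
    FROM Probes AS P
    WHERE EXISTS (SELECT 1 FROM Results AS R WHERE R.ProbeID = P.ID)
),
BranchCounts AS
(
    SELECT SB.BranchID, COUNT(*) AS SuitableProbes
    FROM SpicieBranches AS SB
    INNER JOIN ProbesWithResults AS P ON P.SpicieID = SB.SpicieID
    GROUP BY SB.BranchID
)
SELECT BranchID, SuitableProbes
FROM BranchCounts; -- join this in place of the derived table X in the UPDATE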
It appears the join on the two uniqueidentifier columns is the source of the problem. One is a clustered index, the other non-clustered (on the FK table). It's good that there are indexes on them; unfortunately, GUIDs are notoriously poor performers when joining large numbers of rows.
As troubleshooting steps:
what state are the indexes in? When were the statistics last updated?
how performant is the subquery on its own, when executed ad hoc? i.e. when you run the statement below by itself, how fast does the result set return? Is that acceptable?
after rebuilding the two indexes and updating statistics (a sketch follows the query below), is there any measurable difference?
SELECT P.ID, 1 AS SuitableProbes FROM Probes AS P
INNER JOIN Results AS R ON P.ID = R.ProbeID
GROUP BY P.ID HAVING COUNT(R.ID) > 0
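For the third step, a minimal maintenance sketch; the table names follow this question's schema, but which specific indexes to rebuild depends on your actual definitions:
-- Rebuild the indexes involved in the GUID join, then refresh statistics.
ALTER INDEX ALL ON dbo.Probes REBUILD;
ALTER INDEX ALL ON dbo.Results REBUILD;
UPDATE STATISTICS dbo.Probes WITH FULLSCAN;
UPDATE STATISTICS dbo.Results WITH FULLSCAN;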
The following runs about 15x faster on my system:
UPDATE B
SET B.LockedBy = @lockedBy,
B.LockedOn = @lockedOn,
B.UnlockOn = @unlockOn,
B.Complete = 1
FROM
(
SELECT TOP (@count) B.LockedBy, B.LockedOn, B.UnlockOn, B.Complete
FROM Objectives AS O
INNER JOIN Generations AS G ON G.ObjectiveID = O.ID
INNER JOIN Branches AS B ON B.GenerationID = G.ID
INNER JOIN
(
SELECT SB.BranchID AS BranchID, COUNT(*) AS SuitableProbes
FROM SpicieBranches AS SB
INNER JOIN Probes AS P ON P.SpicieID = SB.SpicieID
WHERE EXISTS(SELECT * FROM Results AS R WHERE R.ProbeID = P.ID)
GROUP BY SB.BranchID
) AS X ON X.BranchID = B.ID
WHERE
(O.Active = 1)
AND (B.Sealed = 0)
AND (B.GenerationNo < O.BranchGenerations)
AND (B.LockedBy IS NULL OR DATEDIFF(SECOND, B.UnlockOn, GETDATE()) > 0)
AND (B.Complete = 1 OR X.SuitableProbes = O.BranchSize * O.EstimateCount * O.ProbeCount)
) AS B
Insert the subquery into a local temporary table:
SELECT SB.BranchID AS BranchID, SUM(X.SuitableProbes) AS SuitableProbes
into #temp FROM SpicieBranches AS SB
INNER JOIN Probes AS P ON P.SpicieID = SB.SpicieID
INNER JOIN
(
SELECT P.ID, 1 AS SuitableProbes
FROM Probes AS P
/* ----> */ INNER JOIN Results AS R ON P.ID = R.ProbeID /* SSMS Estimated execution plan says this operation is the roughest */
GROUP BY P.ID
HAVING COUNT(R.ID) > 0
) AS X ON P.ID = X.ID
GROUP BY SB.BranchID
The query below joins on pre-filtered derived tables instead of the complete tables:
UPDATE B
SET B.LockedBy = @lockedBy,
B.LockedOn = @lockedOn,
B.UnlockOn = @unlockOn,
B.Complete = 1
FROM
(
SELECT TOP (@count) B.LockedBy, B.LockedOn, B.UnlockOn, B.Complete
FROM
(
SELECT ID, BranchGenerations, (BranchSize * EstimateCount * ProbeCount) AS MultipliedFactor
FROM Objectives AS O WHERE (O.Active = 1)
) O
INNER JOIN Generations AS G ON G.ObjectiveID = O.ID
INNER JOIN
(
SELECT Sealed, GenerationNo, GenerationID, LockedBy, LockedOn, UnlockOn, ID, Complete
FROM Branches
WHERE Sealed = 0 AND (LockedBy IS NULL OR DATEDIFF(SECOND, UnlockOn, GETDATE()) > 0)
) B ON B.GenerationID = G.ID
INNER JOIN
(
SELECT * FROM #temp
) AS X ON X.BranchID = B.ID
WHERE
(B.GenerationNo < O.BranchGenerations)
AND (B.Complete = 1 OR X.SuitableProbes = O.MultipliedFactor)
) AS B
I'm doing an INNER JOIN across 6 tables for a module in my application. Basically it takes the pro staff and gets their name and email, and compares that to a list of job applicants.
On mod_employmentAppJobs I need to set a column to true for each row selected. Basically this sets a flag that tells the SQL not to select the row again, because we already sent an email for that user. It's a bit field.
How do I set the emailSent field to true in a SQL statement? -- ColdFusion 8 is the application server, just FYI....
SELECT *
FROM pro_Profile p
INNER JOIN pro_Email e ON p.profileID = e.profileID
INNER JOIN mod_userStatus m ON p.profileID = m.profileID
<!--- Joins the pro staff profiles to the employment app --->
INNER JOIN mod_employmentAppJobTitles a ON p.profileID = a.departmentID
<!--- Join Job titles to the jobs --->
INNER JOIN mod_employmentAppJobs b ON a.jobTitleID=b.jobTitleID
<!--- Joining the table on where the profile equals everything else --->
INNER JOIN mod_employmentAppProfile c ON c.eAppID = b.eAppID
WHERE b.emailSent = 'False'
You have a couple of choices.
1) Use a temp table: select the data into it first, update mod_employmentAppJobs from it, and then select from the temp table to return your data.
So, it would look something like this.
create a temp table
CREATE TABLE #tmpTABLE
(
EmailAddress varchar(100),
JobTitle varchar(50),
JobTitleId bigint
......
)
Insert into it
INSERT INTO #tmpTable
SELECT EmailAddress,JobTitle, ........
FROM pro_Profile p
INNER JOIN pro_Email e
ON p.profileID = e.profileID
INNER JOIN mod_userStatus m
ON p.profileID = m.profileID
<!--- Joins the pro staff profiles to the employment app --->
INNER JOIN mod_employmentAppJobTitles a
ON p.profileID = a.departmentID
INNER JOIN mod_employmentAppJobs b
<!--- Join Job titles to the jobs --->
ON a.jobTitleID=b.jobTitleID
<!--- Joining the table on where the profile equals everything else --->
INNER JOIN mod_employmentAppProfile c
ON c.eAppID = b.eAppID
WHERE b.emailSent = 'False'
Update the source table (I'd recommend an index on jobTitleId in the temp table for performance, if applicable):
UPDATE b
SET EmailSent = 'True'
FROM mod_employmentAppJobs b
INNER JOIN #tmpTable tmp
ON b.jobTitleID = tmp.jobTitleID
Get the actual data back to the app layer
SELECT * FROM #tmpTable
For good measure, I recommend sprinkling in BEGIN TRAN...COMMIT...ROLLBACK and BEGIN TRY...END TRY BEGIN CATCH...END CATCH, to taste and to business requirements (a sketch follows below).
Also, it's good manners to drop the temp table after you are done with it, even though SQL Server will not take offense if you don't.
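A minimal sketch of that scaffolding around the three statements above; adjust to your business requirements:
BEGIN TRY
    BEGIN TRAN;
    -- INSERT INTO #tmpTable ... / UPDATE ... / SELECT * FROM #tmpTable
    COMMIT;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK;
    THROW; -- rethrow (SQL Server 2012+); use RAISERROR on older versions
END CATCH
DROP TABLE #tmpTable; -- the polite cleanup mentioned above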
2) You can use the OUTPUT clause of the UPDATE statement. Note that OUTPUT belongs between SET and FROM:
UPDATE b
SET EmailSent = 'True'
OUTPUT inserted.*
FROM pro_Profile p
INNER JOIN pro_Email e
ON p.profileID = e.profileID
INNER JOIN mod_userStatus m
ON p.profileID = m.profileID
<!--- Joins the pro staff profiles to the employment app --->
INNER JOIN mod_employmentAppJobTitles a
ON p.profileID = a.departmentID
INNER JOIN mod_employmentAppJobs b
<!--- Join Job titles to the jobs --->
ON a.jobTitleID=b.jobTitleID
<!--- Joining the table on where the profile equals everything else --->
INNER JOIN mod_employmentAppProfile c
ON c.eAppID = b.eAppID
WHERE b.emailSent = 'False'
This should get you the resultset right back to your app layer
You could store the result in a table variable: use its data for the update, and still return it at the end. It's a lot of typing but pretty straightforward:
-- Select data into a table variable
declare @result table (jobTitleID int, ...)
INSERT @result
(jobTitleID, ...)
SELECT jobTitleID
FROM pro_Profile p
...
-- Update emailSent flag
update mod_employmentAppJobs
set emailSent = 'True'
where jobTitleID in (select jobTitleId from @result)
-- Return result to application
select jobTitleId, ...
from @result
If you need to update and then return the data, probably the best approach is a stored procedure, where you first query the data, then update it, and then return it.
Well, this is just an idea, not the best solution.
You could read the rows for which the bit is false with a DataReader, and then execute a Command that sets the bit to true for the IDs contained in that result set.
The result set gives you the data you need...
I just tacked an UPDATE on the end and it worked:
UPDATE mod_employmentAppJobs
SET emailSent = 'True'