How to detect which query holds a lock in Postgres?

I want to continuously monitor mutual locks in Postgres.
I came across the Locks Monitoring article and tried to run the following query:
SELECT bl.pid     AS blocked_pid,
       a.usename  AS blocked_user,
       kl.pid     AS blocking_pid,
       ka.usename AS blocking_user,
       a.query    AS blocked_statement
FROM pg_catalog.pg_locks bl
JOIN pg_catalog.pg_stat_activity a ON a.pid = bl.pid
JOIN pg_catalog.pg_locks kl ON kl.transactionid = bl.transactionid AND kl.pid != bl.pid
JOIN pg_catalog.pg_stat_activity ka ON ka.pid = kl.pid
WHERE NOT bl.granted;
Unfortunately, it never returns a non-empty result set. If I simplify the query to the following form:
SELECT bl.pid    AS blocked_pid,
       a.usename AS blocked_user,
       a.query   AS blocked_statement
FROM pg_catalog.pg_locks bl
JOIN pg_catalog.pg_stat_activity a ON a.pid = bl.pid
WHERE NOT bl.granted;
then it returns queries which are waiting to acquire a lock. But I cannot manage to change it so that it returns both the blocked and the blocking queries.
Any ideas?

Since version 9.6 this is a lot easier, as that release introduced the function pg_blocking_pids() to find the sessions that are blocking another session.
So you can use something like this:
select pid,
       usename,
       pg_blocking_pids(pid) as blocked_by,
       query as blocked_query
from pg_stat_activity
where cardinality(pg_blocking_pids(pid)) > 0;
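If you want to see the query in action, it is easy to provoke a block yourself (a minimal sketch, assuming a scratch table t with a column id):

-- Session 1: take a row lock and leave the transaction open.
BEGIN;
UPDATE t SET id = id WHERE id = 1;

-- Session 2: this statement now blocks on session 1.
UPDATE t SET id = id WHERE id = 1;

-- Session 3: the monitoring query above now lists session 2 as
-- blocked_query, with session 1's pid in its blocked_by array.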

From this excellent article on query locks in Postgres, one can get the blocked and blocking queries and their details with the following statement.
CREATE VIEW lock_monitor AS (
SELECT
    COALESCE(blockingl.relation::regclass::text, blockingl.locktype) AS locked_item,
    now() - blockeda.query_start AS waiting_duration,
    blockeda.pid AS blocked_pid,
    blockeda.query AS blocked_query,
    blockedl.mode AS blocked_mode,
    blockinga.pid AS blocking_pid,
    blockinga.query AS blocking_query,
    blockingl.mode AS blocking_mode
FROM pg_catalog.pg_locks blockedl
JOIN pg_stat_activity blockeda ON blockedl.pid = blockeda.pid
JOIN pg_catalog.pg_locks blockingl ON (
     ( blockingl.transactionid = blockedl.transactionid OR
       (blockingl.relation = blockedl.relation AND blockingl.locktype = blockedl.locktype)
     ) AND blockedl.pid != blockingl.pid)
JOIN pg_stat_activity blockinga ON blockingl.pid = blockinga.pid
     AND blockinga.datid = blockeda.datid
WHERE NOT blockedl.granted
  AND blockinga.datname = current_database()
);
SELECT * FROM lock_monitor;
As the query is long but useful, the article's author created a view for it to simplify its usage.

This modification of a_horse_with_no_name's answer will give you the last (or current, if it's still actively running) query of the blocking session in addition to just the blocked sessions:
SELECT activity.pid,
       activity.usename,
       activity.query,
       blocking.pid AS blocking_id,
       blocking.query AS blocking_query
FROM pg_stat_activity AS activity
JOIN pg_stat_activity AS blocking ON blocking.pid = ANY(pg_blocking_pids(activity.pid));
This helps you see which operations are interfering with each other (even if the block comes from a previous query), assess the impact of killing one session, and figure out how to prevent blocking in the future.

Postgres has a very rich system catalog exposed via SQL tables.
PG's statistics collector is a subsystem that supports collection and reporting of information about server activity.
Now to figure out the blocking PIDs you can simply query pg_stat_activity.
select pg_blocking_pids(pid) as blocked_by
from pg_stat_activity
where cardinality(pg_blocking_pids(pid)) > 0;
To get the query corresponding to a blocking PID, you can self-join pg_stat_activity or use it in the WHERE clause of a subquery.
SELECT query
FROM pg_stat_activity
WHERE pid IN (SELECT unnest(pg_blocking_pids(pid)) AS blocked_by
              FROM pg_stat_activity
              WHERE cardinality(pg_blocking_pids(pid)) > 0);
Note: since pg_blocking_pids(pid) returns an integer[], you need to unnest it before using it in a WHERE pid IN clause.
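The self-join form mentioned above can also be written with a lateral unnest, pairing each blocked query directly with each of its blockers in one result set (a sketch, 9.6+ only):

SELECT blocked.pid    AS blocked_pid,
       blocked.query  AS blocked_query,
       blocking.pid   AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity AS blocked
CROSS JOIN LATERAL unnest(pg_blocking_pids(blocked.pid)) AS b(pid)
JOIN pg_stat_activity AS blocking ON blocking.pid = b.pid;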
Hunting for slow queries can be tedious sometimes, so have patience. Happy hunting.

To show all blocked queries:
select pid,
       usename,
       pg_blocking_pids(pid) as blocked_by,
       query as blocked_query
from pg_stat_activity
where cardinality(pg_blocking_pids(pid)) > 0;
You can cancel or kill a single blocked query by pid using the commands below.
SELECT pg_cancel_backend(<blocked_pid>);
SELECT pg_terminate_backend(<blocked_pid>);
You can terminate all blocked queries in one statement:
SELECT pg_cancel_backend(a.pid), pg_terminate_backend(a.pid)
FROM (select pid,
             usename,
             pg_blocking_pids(pid) as blocked_by,
             query as blocked_query
      from pg_stat_activity
      where cardinality(pg_blocking_pids(pid)) > 0) a;
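Note that calling both functions on the same pid is redundant: pg_cancel_backend() cancels only the session's current query, while pg_terminate_backend() ends the whole session outright. A gentler variant that merely cancels the blocked queries, leaving their sessions alive:

SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;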

For PostgreSQL versions earlier than 9.6, which lack the pg_blocking_pids() function, you can use the following query to find the blocked and blocking queries (it relies on pg_stat_activity's boolean waiting column, which was removed in 9.6):
SELECT w.query AS waiting_query,
       w.pid AS waiting_pid,
       w.usename AS waiting_user,
       now() - w.query_start AS waiting_duration,
       l.query AS locking_query,
       l.pid AS locking_pid,
       l.usename AS locking_user,
       t.schemaname || '.' || t.relname AS tablename,
       now() - l.query_start AS locking_duration
FROM pg_stat_activity w
JOIN pg_locks l1 ON w.pid = l1.pid AND NOT l1.granted
JOIN pg_locks l2 ON l1.relation = l2.relation AND l2.granted
JOIN pg_stat_activity l ON l2.pid = l.pid
JOIN pg_stat_user_tables t ON l1.relation = t.relid
WHERE w.waiting;

One thing I find is often missing from these is the ability to look up row locks. At least on the larger databases I have worked on, row locks are not shown in pg_locks (if they were, pg_locks would be much, much larger, and there isn't a real data type that could represent the locked row properly in that view).
I don't know that there is a simple solution to this, but usually what I do is look at the table where the lock is waiting and search for rows whose xmax is less than the transaction id present there. That usually gives me a place to start, but it is a bit hands-on and not automation friendly.
Note that this shows you uncommitted writes to rows in those tables. Once committed, the rows are not visible in the current snapshot. But for large tables, this is a pain.
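For reference, that manual inspection looks roughly like this (a sketch; accounts is a hypothetical table name, and xmax is the hidden system column recording the transaction that last updated, deleted or locked each row version):

-- Show candidate rows whose xmax is set, i.e. rows some transaction
-- has touched; xmax is cast to text because xid has no ordering ops.
SELECT ctid, xmin, xmax, *
FROM accounts
WHERE xmax::text <> '0';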

Others have already answered your question, but I had a particular situation: there were lots of queries blocking each other, and I wanted to find the main blocker session for each blocked session.
For example, session 5 was blocked by session 4, and 4 was waiting for 3 and 2, while 3 and 2 were waiting for session 1:
5 -> 4 -> {3,2} -> 1
In this case session 1 is the real problem; once it is cleared, the others will take care of themselves.
So I wanted a query which will show me results like this:
Blocked pid | Blocker pid
------------+------------
          5 |           1
          4 |           1
          3 |           1
          2 |           1
So if there is any chain of locked sessions, this query will show you the main blocker session for each blocked session.
with recursive find_the_source_blocker as (
    select pid,
           pid as blocker_id
    from pg_stat_activity pa
    where pa.state <> 'idle'
      and array_length(pg_blocking_pids(pa.pid), 1) is null
    union all
    select t.pid as pid,
           f.blocker_id as blocker_id
    from find_the_source_blocker f
    join (select act.pid,
                 blc.pid as blocker_id
          from pg_stat_activity as act
          left join pg_stat_activity as blc on blc.pid = any(pg_blocking_pids(act.pid))
          where act.state <> 'idle') t on f.pid = t.blocker_id
)
select distinct
       s.pid,
       s.blocker_id,
       pb.usename as blocker_user,
       pb.query_start as blocker_start,
       pb.query as blocker_query,
       pt.query_start as trans_start,
       pt.query as trans_query
from find_the_source_blocker s
join pg_stat_activity pb on s.blocker_id = pb.pid
join pg_stat_activity pt on s.pid = pt.pid
where s.pid <> s.blocker_id;
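If the goal is to clear the chain, you can go one step further and terminate every root blocker, i.e. every session that blocks others but is not itself blocked (a sketch; terminating sessions is disruptive, so inspect the blocker queries first):

SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) = 0      -- not waiting on anyone
  AND pid IN (SELECT unnest(pg_blocking_pids(a.pid))
              FROM pg_stat_activity AS a);        -- but blocking someone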

Related

SQL Server query issues

We have a SQL Server 2012 database in our live project, with multiple tables and records. We are facing a problem where two nearly identical SQL queries against the same table perform very differently: one executes quickly, while the other is tremendously slow.
I don't know why this is happening; could someone help me solve it?
Our two queries are given below.
First query (taking a very long time to execute):
SELECT * FROM (SELECT TOP 10 TrackerResponse.EventName, TrackerResponse.ReceiveTime, ISNULL(TrackerResponse.InputStatus, 0) AS InputStatus,
TrackerResponse.Latitude, TrackerResponse.Longitude, TrackerResponse.Speed,
TrackerResponse.TrackerID, TrackerResponse.OdoMeter, TrackerResponse.Direction,
UserCar.CarNo FROM TrackerResponse
INNER JOIN UserCar ON (UserCar.TrackerID = TrackerResponse.TrackerID)
WHERE (TrackerResponse.EventName IS NOT NULL AND TrackerResponse.EventName <> '')
AND TrackerResponse.TrackerID = 112 ORDER BY ID DESC) AS Events
Second query (taking less execution time):
SELECT * FROM (SELECT TOP 10 TrackerResponse.EventName,TrackerResponse.ReceiveTime,ISNull(TrackerResponse.InputStatus,0) AS InputStatus,
TrackerResponse.Latitude,TrackerResponse.Longitude,TrackerResponse.Speed,
TrackerResponse.TrackerID,TrackerResponse.OdoMeter,TrackerResponse.Direction,
UserCar.CarNo FROM TrackerResponse
INNER JOIN UserCar ON (UserCar.TrackerID = TrackerResponse.TrackerID)
WHERE (TrackerResponse.EventName IS NOT NULL AND TrackerResponse.EventName<>'')
AND TrackerResponse.TrackerID = 56 Order By ID DESC) AS Events
I see both queries are identical except for the TrackerID value, so my best guess would be that this query is a victim of parameter sniffing.
Try using OPTION (RECOMPILE) as below to see if this is the cause, and follow the article referenced above for the resolution:
SELECT * FROM (SELECT TOP 10 TrackerResponse.EventName, TrackerResponse.ReceiveTime, ISNULL(TrackerResponse.InputStatus, 0) AS InputStatus,
TrackerResponse.Latitude, TrackerResponse.Longitude, TrackerResponse.Speed,
TrackerResponse.TrackerID, TrackerResponse.OdoMeter, TrackerResponse.Direction,
UserCar.CarNo FROM TrackerResponse
INNER JOIN UserCar ON (UserCar.TrackerID = TrackerResponse.TrackerID)
WHERE (TrackerResponse.EventName IS NOT NULL AND TrackerResponse.EventName <> '')
AND TrackerResponse.TrackerID = 56 ORDER BY ID DESC) AS Events
OPTION (RECOMPILE)

How can I optimize this SQL query? (Solarwinds Orion)

I'm very new to SQL, and still learning. I'm using a reporting tool called Solarwinds Orion, and I'm honestly not sure how specific the query I have written is to the program, so if there's anything in the query that's confusing, let me know and I'll try to figure out if it's specific to the program or not.
The problem with the query I'm running is that it times out after a very long time (maybe an hour) of running. The database I'm using is huge. Unfortunately I don't really know how huge, but I've been told it's huge.
Is there anything I am doing wrong that would have a huge performance impact?
SELECT TOP 10000
Nodes.Caption AS NodeName,
NetflowApplicationSummary.AppName AS Application_Name,
SUM(NetflowApplicationSummary.TotalBytes) AS SUM_of_Bytes_Transferred,
AVG(Case OutBandwidth
When 0 Then 0
Else (NetflowApplicationSummary.TotalBytes/OutBandwidth) * 100
End) AS TEST_PERCENT
FROM
((NetflowApplicationSummary
INNER JOIN Nodes ON (NetflowApplicationSummary.NodeID = Nodes.NodeID))
INNER JOIN InterfaceTraffic ON (Nodes.NodeID = InterfaceTraffic.InterfaceID))
INNER JOIN Interfaces ON (Nodes.NodeID = Interfaces.NodeID)
WHERE
( InterfaceTraffic.DateTime > (GetDate()-30) )
AND
(Nodes.WANCircuit = 1)
GROUP BY Nodes.Caption, NetflowApplicationSummary.AppName
EDIT: I ran COUNT(*) on each of my tables, with the results below.
SELECT COUNT(*) FROM NetflowApplicationSummary;  -- 50,671,011
SELECT COUNT(*) FROM Nodes;                      -- 898
SELECT COUNT(*) FROM InterfaceTraffic;           -- 18,000,166
SELECT COUNT(*) FROM Interfaces;                 -- 3,938
-- Total: 68,676,013 rows
I really have no idea if 68 million rows counts as a huge database, to be honest.
A couple of notes:
The INNER JOIN operator is associative, so get rid of those parentheses in the FROM clause and let the optimizer figure out the best join order.
You may have an implied cursor from the getdate() function being called for every row. Store the value in a local variable and compare to that.
The resulting SQL should look like this:
DECLARE @Date as datetime = getdate() - 30;
SELECT TOP 10000
Nodes.Caption AS NodeName,
NetflowApplicationSummary.AppName AS Application_Name,
SUM(NetflowApplicationSummary.TotalBytes) AS SUM_of_Bytes_Transferred,
AVG(Case OutBandwidth
When 0 Then 0
Else (NetflowApplicationSummary.TotalBytes/OutBandwidth) * 100
End) AS TEST_PERCENT
FROM NetflowApplicationSummary
INNER JOIN Nodes ON NetflowApplicationSummary.NodeID = Nodes.NodeID
INNER JOIN InterfaceTraffic ON Nodes.NodeID = InterfaceTraffic.InterfaceID
INNER JOIN Interfaces ON Nodes.NodeID = Interfaces.NodeID
WHERE InterfaceTraffic.DateTime > @Date
AND Nodes.WANCircuit = 1
GROUP BY Nodes.Caption, NetflowApplicationSummary.AppName
Also, make sure you have an index on table InterfaceTraffic with a leading field of DateTime. If it doesn't exist you may need to pay the one-time penalty of creating it.
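If that index is missing, creating it would look something like this (a sketch; the index name and the INCLUDE column are illustrative, not from the original schema):

CREATE NONCLUSTERED INDEX IX_InterfaceTraffic_DateTime
    ON InterfaceTraffic (DateTime)
    INCLUDE (InterfaceID);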
If this doesn't help, then you may need to post the execution plan where it can be inspected.
Out of interest, also perform a COUNT(*) on all four tables and post the results, just so members here can make their own assessment of how big your database really is. It is amazing how many non-technical people still think a 1 or 10 GB database is huge, while I run that easily on my workstation!

Postgres consistently favoring nested loop join over merge join

I have a complex query:
SELECT DISTINCT ON (delivery.id)
delivery.id, dl_processing.pid
FROM mailer.mailer_message_recipient_rel AS delivery
JOIN mailer.mailer_message AS message ON delivery.message_id = message.id
JOIN mailer.mailer_message_recipient_rel_log AS dl_processing ON dl_processing.rel_id = delivery.id AND dl_processing.status = 1000
-- LEFT JOIN mailer.mailer_recipient AS r ON delivery.email = r.email
JOIN mailer.mailer_mailing AS mailing ON message.mailing_id = mailing.id
WHERE
NOT EXISTS (SELECT dl_finished.id FROM mailer.mailer_message_recipient_rel_log AS dl_finished WHERE dl_finished.rel_id = delivery.id AND dl_finished.status <> 1000) AND
dl_processing.date <= NOW() - (36000 * INTERVAL '1 second') AND
NOT EXISTS (SELECT ml.id FROM mailer.mailer_message_log AS ml WHERE ml.message_id = message.id) AND
-- (r.times_bounced < 5 OR r.times_bounced IS NULL) AND
NOT EXISTS (SELECT ur.id FROM mailer.mailer_unsubscribed_recipient AS ur WHERE ur.email = delivery.email AND ur.list_id = mailing.list_id)
ORDER BY delivery.id, dl_processing.id DESC
LIMIT 1000;
It is running very slowly, and the reason seems to be that Postgres is consistently avoiding using merge joins in its query plan despite me having all the indices that I would need for this. It looks really depressing:
http://explain.depesz.com/s/tVY
http://i.stack.imgur.com/Myw4R.png
Why would this happen? How do I troubleshoot such an issue?
UPD: with @wildplasser's help I have reworked the query to fix performance (while changing its semantics somewhat):
SELECT delivery.id, dl_processing.pid
FROM mailer.mailer_message_recipient_rel AS delivery
JOIN mailer.mailer_message AS message ON delivery.message_id = message.id
JOIN mailer.mailer_message_recipient_rel_log AS dl_processing ON dl_processing.rel_id = delivery.id AND dl_processing.status in (1000, 2, 5) AND dl_processing.date <= NOW() - (36000 * INTERVAL '1 second')
LEFT JOIN mailer.mailer_recipient AS r ON delivery.email = r.email
WHERE
(r.times_bounced < 5 OR r.times_bounced IS NULL) AND
NOT EXISTS (SELECT dl_other.id FROM mailer.mailer_message_recipient_rel_log AS dl_other WHERE dl_other.rel_id = delivery.id AND dl_other.id > dl_processing.id) AND
NOT EXISTS (SELECT ml.id FROM mailer.mailer_message_log AS ml WHERE ml.message_id = message.id) AND
NOT EXISTS (SELECT ur.id FROM mailer.mailer_unsubscribed_recipient AS ur JOIN mailer.mailer_mailing AS mailing ON message.mailing_id = mailing.id WHERE ur.email = delivery.email AND ur.list_id = mailing.list_id)
ORDER BY delivery.id
LIMIT 1000
It now runs well, but the query plan still sports these horrible nested loop joins <_<:
http://explain.depesz.com/s/MTo3
I would still like to know why that is.
The reason is that Postgres is actually doing the right thing, and I suck at math. Suppose table A has N rows and table B has M rows, and they are joined via a column that both have a B-tree index on. Then the following is true:
A nested loop join's time complexity is not O(MN), as I naively thought, but O(M log N) or O(N log M), depending on which table is scanned linearly. If both are scanned by index, we get O(M log M log N) or O(N log M log N), respectively. But since the index scan is only required if a specific order of the rows is needed for yet another join or for the ORDER BY clause, as we'll see it's not a bad deal at all.
A merge join's time complexity is O(M log M + N log N), which means it loses to the nested loop join, provided the asymptotic proportionality coefficients are the same, and AFAIK they should both be equal to 1 in most implementations. Since both tables must be iterated by the same index in the same direction, an additional sort is required whenever a different order is needed, which easily makes the complexity worse than in the nested loop case.
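To make that concrete, plug in M = N = 10^6 with unit constants (a back-of-the-envelope check, using base-2 logarithms, so log 10^6 ≈ 20):

nested loop (inner side via index): M log N ≈ 10^6 * 20 = 2 * 10^7
merge join (two sorts):             M log M + N log N ≈ 4 * 10^7

so under these formulas the nested loop indeed comes out about twice as cheap.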
So basically, despite being associated with merge sort, which we all love, the merge join almost always sucks.
The reason my first query was so slow is that it had to sort before applying the limit, and it was also bad in many other ways. After applying @wildplasser's suggestions, I managed to reduce the number of (still expensive) nested loops and also allow the limit to be taken without a sort, ensuring that Postgres most likely won't need to run the outer scan to completion. That is where the bulk of the performance gain comes from.

Timeout running SQL query

I'm trying to use the aggregation features of the Django ORM to run a query on an MSSQL 2008 R2 database, but I keep getting a timeout error. The query (generated by Django) which fails is below. I've tried running it directly in SQL Server Management Studio and it works, but takes 3.5 minutes.
It does look like it's aggregating over a bunch of fields it doesn't need to, but I wouldn't have thought that should cause it to take this long. The database isn't that big either: auth_user has 9 records, tickets_ticket has 1,210, and tickets_ticket_watchers has 1,876. Is there something I'm missing?
SELECT
[auth_user].[id],
[auth_user].[password],
[auth_user].[last_login],
[auth_user].[is_superuser],
[auth_user].[username],
[auth_user].[first_name],
[auth_user].[last_name],
[auth_user].[email],
[auth_user].[is_staff],
[auth_user].[is_active],
[auth_user].[date_joined],
COUNT([tickets_ticket].[id]) AS [tickets_captured__count],
COUNT(T3.[id]) AS [assigned_tickets__count],
COUNT([tickets_ticket_watchers].[ticket_id]) AS [tickets_watched__count]
FROM
[auth_user]
LEFT OUTER JOIN [tickets_ticket] ON ([auth_user].[id] = [tickets_ticket].[capturer_id])
LEFT OUTER JOIN [tickets_ticket] T3 ON ([auth_user].[id] = T3.[responsible_id])
LEFT OUTER JOIN [tickets_ticket_watchers] ON ([auth_user].[id] = [tickets_ticket_watchers].[user_id])
GROUP BY
[auth_user].[id],
[auth_user].[password],
[auth_user].[last_login],
[auth_user].[is_superuser],
[auth_user].[username],
[auth_user].[first_name],
[auth_user].[last_name],
[auth_user].[email],
[auth_user].[is_staff],
[auth_user].[is_active],
[auth_user].[date_joined]
HAVING
(COUNT([tickets_ticket].[id]) > 0 OR COUNT(T3.[id]) > 0 )
EDIT:
Here are the relevant indexes (excluding those not used in the query):
auth_user.id (PK)
auth_user.username (Unique)
tickets_ticket.id (PK)
tickets_ticket.capturer_id
tickets_ticket.responsible_id
tickets_ticket_watchers.id (PK)
tickets_ticket_watchers.user_id
tickets_ticket_watchers.ticket_id
EDIT 2:
After a bit of experimentation, I've found that the following query is the smallest that results in the slow execution:
SELECT
COUNT([tickets_ticket].[id]) AS [tickets_captured__count],
COUNT(T3.[id]) AS [assigned_tickets__count],
COUNT([tickets_ticket_watchers].[ticket_id]) AS [tickets_watched__count]
FROM
[auth_user]
LEFT OUTER JOIN [tickets_ticket] ON ([auth_user].[id] = [tickets_ticket].[capturer_id])
LEFT OUTER JOIN [tickets_ticket] T3 ON ([auth_user].[id] = T3.[responsible_id])
LEFT OUTER JOIN [tickets_ticket_watchers] ON ([auth_user].[id] = [tickets_ticket_watchers].[user_id])
GROUP BY
[auth_user].[id]
The weird thing is that if I comment out any two lines in the above, it runs in less than 1 s, but it doesn't seem to matter which lines I remove (although obviously I can't remove a JOIN without also removing the relevant SELECT line).
EDIT 3:
The Python code which generated this is:
User.objects.annotate(
Count('tickets_captured'),
Count('assigned_tickets'),
Count('tickets_watched')
)
A look at the execution plan shows that SQL Server is first doing a cross join on all the tables, resulting in about 280 million rows and 6 GB of data. I assume that this is where the problem lies, but why is it happening?
SQL Server is doing exactly what it was asked to do. Unfortunately, Django is not generating the right query for what you want. It looks like you need to count distinct, instead of just count: Django annotate() multiple times causes wrong answers
As for why the query behaves that way: the query says to join the four tables together. So say a user has 2 captured tickets, 3 assigned tickets, and 4 watched tickets; the join will return 2*3*4 = 24 rows for that user, one for each combination of tickets. The distinct part will remove all the duplicates.
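Concretely, the fix on the SQL side is to count distinct ids in each aggregate, which collapses the duplicated combinations (a sketch of the corrected query from EDIT 2; on the Django side this corresponds to Count('...', distinct=True)):

SELECT
    COUNT(DISTINCT [tickets_ticket].[id]) AS [tickets_captured__count],
    COUNT(DISTINCT T3.[id]) AS [assigned_tickets__count],
    COUNT(DISTINCT [tickets_ticket_watchers].[ticket_id]) AS [tickets_watched__count]
FROM
    [auth_user]
    LEFT OUTER JOIN [tickets_ticket] ON ([auth_user].[id] = [tickets_ticket].[capturer_id])
    LEFT OUTER JOIN [tickets_ticket] T3 ON ([auth_user].[id] = T3.[responsible_id])
    LEFT OUTER JOIN [tickets_ticket_watchers] ON ([auth_user].[id] = [tickets_ticket_watchers].[user_id])
GROUP BY
    [auth_user].[id]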
what about this?
SELECT auth_user.*,
       C1.tickets_captured__count,
       C2.assigned_tickets__count,
       C3.tickets_watched__count
FROM
auth_user
LEFT JOIN
( SELECT capturer_id, COUNT(*) AS tickets_captured__count
FROM tickets_ticket GROUP BY capturer_id ) AS C1 ON auth_user.id = C1.capturer_id
LEFT JOIN
( SELECT responsible_id, COUNT(*) AS assigned_tickets__count
FROM tickets_ticket GROUP BY responsible_id ) AS C2 ON auth_user.id = C2.responsible_id
LEFT JOIN
( SELECT user_id, COUNT(*) AS tickets_watched__count
FROM tickets_ticket_watchers GROUP BY user_id ) AS C3 ON auth_user.id = C3.user_id
WHERE C1.tickets_captured__count > 0 OR C2.assigned_tickets__count > 0
--WHERE C1.tickets_captured__count is not null OR C2.assigned_tickets__count is not null -- also works (I think with better performance)
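One refinement: because the LEFT JOINs produce NULL counts for users with no matching rows, you may want to coalesce the counts to zero in the outer select (a sketch building on the query above):

SELECT auth_user.*,
       COALESCE(C1.tickets_captured__count, 0) AS tickets_captured__count,
       COALESCE(C2.assigned_tickets__count, 0) AS assigned_tickets__count,
       COALESCE(C3.tickets_watched__count, 0) AS tickets_watched__count
FROM auth_user
LEFT JOIN ( SELECT capturer_id, COUNT(*) AS tickets_captured__count
            FROM tickets_ticket GROUP BY capturer_id ) AS C1 ON auth_user.id = C1.capturer_id
LEFT JOIN ( SELECT responsible_id, COUNT(*) AS assigned_tickets__count
            FROM tickets_ticket GROUP BY responsible_id ) AS C2 ON auth_user.id = C2.responsible_id
LEFT JOIN ( SELECT user_id, COUNT(*) AS tickets_watched__count
            FROM tickets_ticket_watchers GROUP BY user_id ) AS C3 ON auth_user.id = C3.user_id
WHERE C1.tickets_captured__count IS NOT NULL OR C2.assigned_tickets__count IS NOT NULL;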

Select from two tables giving more rows than expected

I am not sure why this behaves this way. I need to select a few values from two tables based on some criteria, which should be clear from the query I tried below.
query = #"SELECT n.borrower, a.sum, n.lender FROM Notification AS n, Acknowledgment AS a
WHERE n.deleted=#del2 AND n.id IN (SELECT parent_id FROM Acknowledgment
WHERE status=#status AND deleted=#del1)";
This returns more rows (12) than expected.
I have two tables, Notification and Acknowledgment, both of which have a field "sum". When I try the query below, it gives the correct 3 rows as expected.
#"SELECT n.borrower, n.sum, n.lender FROM Notification AS n
WHERE n.deleted=#del2 AND n.id IN (SELECT parent_id FROM Acknowledgment
WHERE status=#status AND deleted=#del1)";
Now I need to extend this query so that I get a.sum and not n.sum. But when I try the first query, it gives a lot more rows; the WHERE condition doesn't seem to work. I don't know if it's a quirk of MS Access or something wrong with my query. I'd appreciate an alternate implementation in Access if my query seems fine, because it simply doesn't work! :)
I have read here that different databases implement SELECT in different ways. I don't know if it's something specific to Access.
After a suggestion from Li0liQ, I tried this:
@"SELECT n.borrower, a.sum, n.lender FROM Notification AS n
INNER JOIN Acknowledgment AS a ON a.parent_id = n.id AND a.status=@status AND a.deleted=@deleted1
WHERE n.deleted=@deleted2"
But I now get a "JOIN expression not supported" error.
This is expected behavior because of the cartesian product:
FROM Notification AS n, Acknowledgment AS a
If you have 10 notifications and 5 acknowledgements, you'll get 50 rows in the result, representing all possible combinations of a notification and an acknowledgement. That set is then filtered by the WHERE clause. (That's standard for SQL, not specific to MS Access.)
It sounds like you need a JOIN:
FROM Notification AS n INNER JOIN Acknowledgment AS a ON n.id = a.parent_id
You can then get rid of the subquery:
WHERE n.deleted=@del2 AND a.status=@status AND a.deleted=@del1
EDIT
As requested by nawfal, here is the solution he arrived at, which essentially incorporates the above recommendations:
string query = #"SELECT n.borrower, a.sum, n.lender FROM Notification AS n
INNER JOIN Acknowledgment AS a ON a.parent_id=n.id
WHERE a.status=#status AND a.deleted=#deleted1 AND n.deleted=#deleted2";
In the first query you seem to be trying to perform a JOIN.
However, you end up performing a CROSS JOIN, i.e. you query all possible combinations of rows from both tables (I bet you have 4 rows in the Acknowledgment table).
I hope the following query could do the trick or at least help you think in the right direction:
SELECT
n.borrower, a.sum, n.lender
FROM
Notification AS n
INNER JOIN
Acknowledgment AS a
ON
a.parent_id = n.id
WHERE
n.deleted=@del2 AND a.status=@status AND a.deleted=@del1
Something seems to be wrong with Access, in that I could get this working only by reconstructing the query provided by the answerers here this way:
string query = @"SELECT n.borrower, a.sum, n.lender FROM Notification AS n
INNER JOIN Acknowledgment AS a ON a.parent_id=n.id
WHERE a.status=@status AND a.deleted=@deleted1 AND n.deleted=@deleted2";