How to troubleshoot ORA-02049 and lock problems in general with Oracle - sql

I am getting ORA-02049 occasionally for some long-running and/or intensive transactions. There is seemingly no pattern to this, but it happens on a simple INSERT.
I have no clue how to get any sort of information out of Oracle, but there has to be a way? A log of locking activity, or at least a way to see current locks?

One possible approach is to increase the INIT.ORA parameter distributed_lock_timeout to a larger value. That gives you more time to observe the v$lock table, since the locks will be held longer before the timeout fires.
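For example (a sketch only; the parameter is static, so it needs an instance restart, and this assumes you are running from an spfile):
ALTER SYSTEM SET distributed_lock_timeout = 300 SCOPE=SPFILE;
-- restart the instance afterwards; the default is 60 seconds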
To automate the observation, you can either:
Run a SQL job every 5-10 seconds that logs the contents of v$lock (or the output of the query sandos has given) into a table, and then analyze it to see which session was causing the lock.
Run a STATSPACK or AWR report; the sessions that got blocked should show up with high elapsed time and can be identified that way.
v$session also has three more columns, blocking_instance, blocking_session and blocking_session_status, which can be added to such a query to show what is getting blocked; a small sketch using them follows below.
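A rough sketch (assuming 10g or later, where these columns exist):
SELECT sid,
       serial#,
       username,
       sql_id,
       event,
       seconds_in_wait,
       blocking_instance,
       blocking_session,
       blocking_session_status
FROM   v$session
WHERE  blocking_session IS NOT NULL;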

Use this query to determine possible blocking locks:
SELECT se.username,
NULL,
se.sid,
DECODE( se.command,
0, 'No command',
1, 'CREATE TABLE',
2, 'INSERT',
3, 'SELECT',
4, 'CREATE CLUSTER',
5, 'ALTER CLUSTER',
6, 'UPDATE',
7, 'DELETE',
8, 'DROP CLUSTER',
9, 'CREATE INDEX',
10, 'DROP INDEX',
11, 'ALTER INDEX',
12, 'DROP TABLE',
13, 'CREATE SEQUENCE',
14, 'ALTER SEQUENCE',
15, 'ALTER TABLE',
16, 'DROP SEQUENCE',
17, 'GRANT',
18, 'REVOKE',
19, 'CREATE SYNONYM',
20, 'DROP SYNONYM',
21, 'CREATE VIEW',
22, 'DROP VIEW',
23, 'VALIDATE INDEX',
24, 'CREATE PROCEDURE',
25, 'ALTER PROCEDURE',
26, 'LOCK TABLE',
27, 'NO OPERATION',
28, 'RENAME',
29, 'COMMENT',
30, 'AUDIT',
31, 'NOAUDIT',
32, 'CREATE DATABASE LINK',
33, 'DROP DATABASE LINK',
34, 'CREATE DATABASE',
35, 'ALTER DATABASE',
36, 'CREATE ROLLBACK SEGMENT',
37, 'ALTER ROLLBACK SEGMENT',
38, 'DROP ROLLBACK SEGMENT',
39, 'CREATE TABLESPACE',
40, 'ALTER TABLESPACE',
41, 'DROP TABLESPACE',
42, 'ALTER SESSION',
43, 'ALTER USER',
44, 'COMMIT',
45, 'ROLLBACK',
46, 'SAVEPOINT',
47, 'PL/SQL EXECUTE',
48, 'SET TRANSACTION',
49, 'ALTER SYSTEM SWITCH LOG',
50, 'EXPLAIN',
51, 'CREATE USER',
52, 'CREATE ROLE',
53, 'DROP USER',
54, 'DROP ROLE',
55, 'SET ROLE',
56, 'CREATE SCHEMA',
57, 'CREATE CONTROL FILE',
58, 'ALTER TRACING',
59, 'CREATE TRIGGER',
60, 'ALTER TRIGGER',
61, 'DROP TRIGGER',
62, 'ANALYZE TABLE',
63, 'ANALYZE INDEX',
64, 'ANALYZE CLUSTER',
65, 'CREATE PROFILE',
67, 'DROP PROFILE',
68, 'ALTER PROFILE',
69, 'DROP PROCEDURE',
70, 'ALTER RESOURCE COST',
71, 'CREATE SNAPSHOT LOG',
72, 'ALTER SNAPSHOT LOG',
73, 'DROP SNAPSHOT LOG',
74, 'CREATE SNAPSHOT',
75, 'ALTER SNAPSHOT',
76, 'DROP SNAPSHOT',
79, 'ALTER ROLE',
85, 'TRUNCATE TABLE',
86, 'TRUNCATE CLUSTER',
88, 'ALTER VIEW',
91, 'CREATE FUNCTION',
92, 'ALTER FUNCTION',
93, 'DROP FUNCTION',
94, 'CREATE PACKAGE',
95, 'ALTER PACKAGE',
96, 'DROP PACKAGE',
97, 'CREATE PACKAGE BODY',
98, 'ALTER PACKAGE BODY',
99, 'DROP PACKAGE BODY',
TO_CHAR(se.command) ) command,
DECODE(lo.type,
'MR', 'Media Recovery',
'RT', 'Redo Thread',
'UN', 'User Name',
'TX', 'Transaction',
'TM', 'DML',
'UL', 'PL/SQL User Lock',
'DX', 'Distributed Xaction',
'CF', 'Control File',
'IS', 'Instance State',
'FS', 'File Set',
'IR', 'Instance Recovery',
'ST', 'Disk Space Transaction',
'TS', 'Temp Segment',
'IV', 'Library Cache Invalidation',
'LS', 'Log Start or Switch',
'RW', 'Row Wait',
'SQ', 'Sequence Number',
'TE', 'Extend Table',
'TT', 'Temp Table',
'JQ', 'Job Queue',
lo.type) ltype,
DECODE( lo.lmode,
0, 'NONE', /* Mon Lock equivalent */
1, 'Null Mode', /* N */
2, 'Row-S (SS)', /* L */
3, 'Row-X (SX)', /* R */
4, 'Share (S)', /* S */
5, 'S/Row-X (SSX)', /* C */
6, 'Excl (X)', /* X */
lo.lmode) lmode,
DECODE( lo.request,
0, 'NONE', /* Mon Lock equivalent */
1, 'Null', /* N */
2, 'Row-S (SS)', /* L */
3, 'Row-X (SX)', /* R */
4, 'Share (S)', /* S */
5, 'S/Row-X (SSX)', /* C */
6, 'Excl (X)', /* X */
TO_CHAR(lo.request)) request,
lo.ctime ctime,
DECODE(lo.block,
0, 'No Block',
1, 'Blocking',
2, 'Global',
TO_CHAR(lo.block)) blkothr,
'SYS' owner,
ro.name image
FROM v$lock lo,
v$session se,
v$transaction tr,
v$rollname ro
WHERE se.sid = lo.sid
AND se.taddr = tr.addr(+)
AND tr.xidusn = ro.usn(+)
ORDER BY sid

Try increasing the SHARED_POOL_SIZE value in init.ora. If that fails, try ALTER SYSTEM FLUSH SHARED_POOL.
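For reference, hedged examples of both suggestions (the size below is only a placeholder; whether you can grow the shared pool online depends on your memory settings):
-- one-off flush of the shared pool
ALTER SYSTEM FLUSH SHARED_POOL;
-- grow the shared pool; pick a value appropriate for your system
ALTER SYSTEM SET shared_pool_size = 512M SCOPE=BOTH;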
Also see this.

Could it be a bitmap index causing the error as described here?

Ok, this was a silly problem.
We are using Entity Framework 6.0 (upgraded to 6.2, but no change), Oracle.ManagedDataAccess +EntityFramework 12.2.1100, .NET 4.5.
I was getting ORA-02049: timeout: distributed transaction waiting for lock with the following query:
update "schemaname"."tablename"
set "DUE_DATE" = :p0
where ("ID" = :p1)
(via EF context.Database.Log event). A really simple query, shouldn't have any issues.
Well, I was using the same login on the remote server, in my local debugger, and in Oracle SQL Developer. A co-worker pointed out that I should kill all these extra connections while debugging... and it worked. So the solution in my case was simply not to connect to the database multiple times with the same login.
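If you want to check for the same situation yourself, a small sketch (the login name below is a placeholder) is to count sessions per login in v$session:
SELECT username, machine, program, COUNT(*) AS sessions
FROM   v$session
WHERE  username = 'YOUR_LOGIN'
GROUP  BY username, machine, program
HAVING COUNT(*) > 1;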

Related

Before insert trigger doesn't change column value

I have this simple trigger, which should change the data before insert.
CREATE OR REPLACE FUNCTION product_fts_create_trg() RETURNS trigger AS $$
BEGIN
    new.name := 'example';
    RETURN new;
END
$$ LANGUAGE plpgsql;
CREATE TRIGGER "product_fts_create_trigger" BEFORE INSERT ON "productmodel" FOR EACH ROW EXECUTE PROCEDURE product_fts_create_trg();
But it doesn't change the column value. I thought the trigger wasn't being executed at all, but if I change new.name to null (the column has a NOT NULL constraint) I get an error, which indicates that the trigger does run.
I use Tortoise ORM with the SQL query below (I don't think this breaks the trigger, but maybe it will be useful for the discussion):
INSERT INTO "productmodel" ("id","created_at","modified_at","deleted","deleted_at","fts","name","short_description","long_description","price","old_price","currency","status","weight","length","width","height","count_available","address","text_verification","product_verified","delivery_types","delivery_payed_by","user_id","video_id","video_verification_id") VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25,$26): ['41ab62f0-1cab-47b5-a360-fcc9fbaf3b69', datetime.datetime(2022, 11, 1, 9, 39, 33, 360691, tzinfo=<UTC>), datetime.datetime(2022, 11, 1, 9, 39, 33, 360713, tzinfo=<UTC>), False, None, None, 'Iphone 20', None, None, 100.0, 100.0, 'rub', 'review', None, None, None, None, 0, None, None, False, '["pickpoint"]', 'self-delivery', 3, None, None]
Interestingly, I have a BEFORE UPDATE trigger and it does work:
create function product_fts_trg() returns trigger
    language plpgsql
as
$$
BEGIN
    if old.name <> new.name or old.short_description <> new.short_description or old.long_description <> new.long_description
    then new.fts := 'example';
    end if;
    RETURN new;
end
$$;
alter function product_fts_trg() owner to vlad;
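One quick sanity check (a sketch, assuming the table and trigger names above) is to ask the catalog whether the BEFORE INSERT trigger is actually attached and enabled on productmodel:
SELECT tgname, tgenabled
FROM   pg_trigger
WHERE  tgrelid = 'productmodel'::regclass
AND    NOT tgisinternal;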

Sorting Pandas dataframe by multiple conditions

I have a large dataframe (thousands of rows by hundreds of columns); a short excerpt is the following:
import pandas as pd
data = {'Step':['', '', '', 'First', 'First', 'Second', 'Third', 'Second', 'First', 'Second', 'First', 'First', 'Second', 'Second'],
'Stuff':['tot', 'white', 'random', 7583, 3563, 824, 521, 7658, 2045, 33, 9823, 5, 8090, 51],
'Mark':['marking', '', '', 1, 5, 5, 5, 1, 27, 27, 1, 6, 1, 9],
'A':['item_a', 100, 'st1', 142, 2, 2, 2, 100, 150, 105, 118, 118, 162, 156],
'B':['skill', 66, 'abc', 160, 2, 130, 140, 169, 1, 2, 130, 140, 144, 127],
'C':['item', 50, 'st1', 2000, 2, 65, 2001, 1999, 1, 2, 2000, 4, 2205, 2222],
'D':['item_c', 100, 'st1', 433, 430, 150, 170, 130, 1, 2, 300, 4, 291, 606],
'E':['test', 90, 'st1', 111, 130, 5, 10, 160, 1, 2, 232, 4, 144, 113],
'F':['done', 80, 'abc', 765, 755, 5, 10, 160, 1, 2, 733, 4, 666, 500],
'G':['nd', 90, 'mag', 500, 420, 5, 10, 160, 1, 2, 300, 4, 469, 500],
'H':['prt', 100, 'st1', 999, 200, 5, 10, 160, 1, 2, 477, 4, 620, 7],
'Name':['NS', '', '', "Pat", "Lucy", "Lucy", "Lucy", "Nick", "Kirk", "Kirk", "Joe", "Nico", "Nico", "Bryan"],
'Value':[ -1, 0, 0, 0, 3, 6, 5, 0, 7, 7, 0, 6, 0, 1]}
df = pd.DataFrame(data)
I need to sort this dataframe according to the following conditions that have to be satisfied all together:
In the "Name" column, names that are the same are to remain grouped (e.g. there are 3
records of "Lucy" next to each other, and they cannot be moved apart)
For each group of names, the appearance order has to remain the one
given by the "Step" column (e.g. the first appearance of "Lucy" is
related to the value "First" in the "Step" column, the second to
"Second" and so on)
All the remaining names that in the "Value" column have a value = 0,
have to be moved below the others (e.g. "Pat" can be moved after the
others, but not "Nico" because there are two records of "Nico" and
the other one has a value = 6)
The first three rows cannot be moved
What I have done is to concatenate different sub-dataframes:
df_groupnames=df[df.duplicated(subset=['Name'], keep=False)]
df_nogroup = df[~df.duplicated(subset=['Name'], keep=False)]
df_nogroup_high = df_nogroup[df_nogroup["Value"] > 0 ]
df_nogroup_null = df_nogroup[df_nogroup["Value"] == 0]
# Let's concatenate these dataframes to get the sorted one
df_sorted = pd.concat([df_groupnames, df_nogroup_high, df_nogroup_null])
It works, but I wonder if there's a smarter, simpler, and maybe faster way to obtain the same result.
Thank you for your attention.

How to alias a list of values in SQL

I need to see if any of a set of columns contains a value in a list.
E.g.
...
SELECT *
FROM Account
WHERE
NOT (
AccountWarningCode1 IN (02, 05, 15, 20, 21, 24, 31, 36, 40, 42, 45, 47, 50, 51, 52, 53, 55, 56, 62, 65, 66, 78, 79, 84, 110, 119, 120, 121, 125, 202)
OR AccountWarningCode2 IN (02, 05, 15, 20, 21, 24, 31, 36, 40, 42, 45, 47, 50, 51, 52, 53, 55, 56, 62, 65, 66, 78, 79, 84, 110, 119, 120, 121, 125, 202)
OR AccountWarningCode3 IN (02, 05, 15, 20, 21, 24, 31, 36, 40, 42, 45, 47, 50, 51, 52, 53, 55, 56, 62, 65, 66, 78, 79, 84, 110, 119, 120, 121, 125, 202)
...
)
The above does work, but what I'd like to do instead is alias the list somehow so I don't repeat myself quite as much.
For example (this doesn't actually work):
WITH bad_warnings AS (02, 05, 15, 20, 21, 24, 31, 36, 40, 42, 45, 47, 50, 51, 52, 53, 55, 56, 62, 65, 66, 78, 79, 84, 110, 119, 120, 121, 125, 202)
SELECT *
FROM Account
WHERE
NOT (
AccountWarningCode1 IN bad_warnings
OR AccountWarningCode2 IN bad_warnings
OR AccountWarningCode3 IN bad_warnings
...
)
Is this possible in T-SQL?
Your second version is actually close. You can use a common table expression:
WITH bad_warnings(code) AS(
SELECT * FROM(VALUES
('02'), ('05'), ('15'), ('20'), ('21'), ('24'),
('31'), ('36'), ('40'), ('42'), ('45'), ('47'),
('50'), ('51'), ('52'), ('53'), ('55'), ('56'),
('62'), ('65'), ('66'), ('78'), ('79'), ('84'),
('110'), ('119'), ('120'), ('121'), ('125'), ('202')
) a(b)
)
SELECT *
FROM Account
WHERE
NOT (
AccountWarningCode1 IN (SELECT code FROM bad_warnings)
OR AccountWarningCode2 IN (SELECT code FROM bad_warnings)
OR AccountWarningCode3 IN (SELECT code FROM bad_warnings)
)
This is how to define a derived table with your values as a CTE:
WITH bad_warnings AS
(SELECT val FROM (VALUES(02),(05),(15),(20),(21),(24),(31),(36),(40),(42),(45),(47),(50),(51),(52),(53),(55),(56),(62),(65),(66),(78),(79),(84),(110),(119),(120),(121),(125),(202)) AS tbl(val)
)
SELECT *
FROM bad_warnings
You can use this like any table in your query.
Your check would be something like
WHERE SomeValue IN (SELECT val FROM bad_warnings)
With NOT IN you would negate this list.
Is this possible in T-SQL?
Yes, either use a table variable or a temporary table. Populate the in-list data into a table variable and use it in as many places within your procedure as you want.
Example:
declare @inlist1 table(elem int);
insert into @inlist1
select 02
union
select 05
union
select 15
union
select 20
union
select 21
union
select 24
Use it now:
WHERE
NOT (
AccountWarningCode1 IN (select elem from @inlist1)
...
)
Or, you can just as well perform a JOIN operation:
FROM Account a
JOIN @inlist1 i ON a.AccountWarningCode1 = i.elem
You can do it like this:
with bad_warnings(code) as
(select '02'
union
select '15'
-- etc.
)
select * from account
where not
(AccountWarningCode1 IN (SELECT code FROM bad_warnings)
-- etc.
)

What is the difference between subquery and values when passed to NOT IN on Postgresql?

In Rails4 app (versions: rails 4.2.3, postgresql 9.3.5), I have model classes like below
class Message < ActiveRecord::Base
belongs_to :receiver, class_name: 'User'
belongs_to :sender, class_name: 'User'
validates :receiver, presence: true
validates :sender, presence: true
end
class User < ActiveRecord::Base
has_many :received_messages, class_name: 'Message', foreign_key: :receiver_id
has_many :sent_messages, class_name: 'Message', foreign_key: :sender_id
end
I want to get a collection of users who have NOT received a message from a specific user, so I wrote these scopes:
class User < ActiveRecord::Base
...
scope :received_messages_from, -> (user) {
includes(:received_messages).
where('messages.sender_id': user.id).
references(:received_messages)
}
scope :not_received_messages_from, -> (user) {
includes(:received_messages).
where.not(id: received_messages_from(user).select(:id)).
references(:received_messages)
}
end
I have these rows in messages table:
message_00:
sender_user_id: 11
receiver_user_id: 12
message_01:
sender_user_id: 11
receiver_user_id: 12
message_02:
sender_user_id: 12
receiver_user_id: 11
message_11:
sender_user_id: 17
receiver_user_id: 11
message_12:
sender_user_id: 11
receiver_user_id: 17
message_13:
sender_user_id: 18
receiver_user_id: 12
message_14:
sender_user_id: 12
receiver_user_id: 18
message_15:
sender_user_id: 17
receiver_user_id: 12
message_16:
sender_user_id: 17
receiver_user_id: 13
message_17:
sender_user_id: 17
receiver_user_id: 14
So, User.received_messages_from(User.find(17)).pluck(:id) returns [11, 12, 13, 14], and the result of User.not_received_messages_from(User.find(17)).pluck(:id) shouldn't contain these ids.
But the not_received_messages_from scope doesn't work: it returns users who have received messages from the specific user. It generates SQL like this (in this example, the user's id is 17):
SELECT "users"."id" FROM "users"
LEFT OUTER JOIN "messages" ON "messages"."receiver_id" = "users"."id"
WHERE ("users"."id"
NOT IN (
SELECT "users"."id" FROM "users"
WHERE "messages"."sender_id" = 17))
User.not_received_messages_from(User.find(17)).pluck(:id) results:
[11, 12, 12, 12, 15, 16, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 17, 18, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100]
So I tried changing .select(:id) to .pluck(:id) in the where clause of the not_received_messages_from scope, and this works.
scope :not_received_messages_from, -> (user) {
includes(:received_messages).
where.not(id: received_messages_from(user).pluck(:id)).
references(:received_messages)
}
SQL:
SELECT "users"."id" FROM "users"
LEFT OUTER JOIN "messages" ON "messages"."receiver_id" = "users"."id"
WHERE ("users"."id" NOT IN (11, 12, 13, 14))
User.not_received_messages_from(User.find(17)).pluck(:id) results:
[15, 16, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 17, 18, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100]
I think the only difference between the two SQL statements is whether a subquery or a static array of ids is passed to NOT IN. Why do the results differ?
This is likely because your sub-select is not returning the expected response.
SELECT "users"."id" FROM "users"
LEFT OUTER JOIN "messages" ON "messages"."receiver_user_id" = "users"."id"
WHERE ("users"."id"
NOT IN (
SELECT "users"."id" FROM "users"
WHERE "messages"."sender_user_id" = 17))
It's been a while since I've looked at PostgreSQL joins, but I don't know what, if anything, that sub-select would produce. It's operating against a join, but... which one? There's no reference in the documentation that explains that.
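For what it's worth, a minimal sketch of the plain SQL I would expect for "users who never received a message from user 17" (assuming the receiver_id/sender_id columns from the generated SQL above) selects the receiver ids directly, so the NOT IN list no longer depends on the outer join:
SELECT "users"."id" FROM "users"
WHERE "users"."id" NOT IN (
SELECT "messages"."receiver_id" FROM "messages"
WHERE "messages"."sender_id" = 17
AND "messages"."receiver_id" IS NOT NULL); -- the IS NOT NULL guard matters: a NULL in a NOT IN list makes it match nothing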

Query for many-to-many association

There are 3 models in my project: Users, Games, GameVisits
class User < ActiveRecord::Base
has_many :game_visits
has_many :games, through: :game_visits
end
class Game < ActiveRecord::Base
has_many :game_visits
has_many :users, through: :game_visits
end
class GameVisit < ActiveRecord::Base
self.table_name = :users_games
enum status: [:visited, :not_visited, :unknown]
belongs_to :user
belongs_to :game
end
My frontend is written in AngularJS, so I want a function that, for a list of games, returns an array of hashes, each containing a user's name and a status (or nil). The data is presented as a table: columns are players, rows are games, and each cell is the presence of a player at a game.
This is an example of such a function:
def team_json(game_ids = nil)
  game_ids ||= Game.pluck(:id)
  games = Game.find(game_ids)
  users = self.users
  result = []
  games.each do |game|
    record = { name: game.name, date: game.date }
    users_array = []
    users.each do |user|
      users_array << {
        name: user.name,
        status: user.game_visits.find_by_game_id(game.id)
      }
    end
    record[:users] = users_array
    result << record
  end
  result
end
Output:
[{:name=>"game 0", :date=>Fri, 19 Dec 2014 11:16:20 UTC +00:00, :users=>[{:name=>"Bob", :status=>nil}]},
{:name=>"game 1", :date=>Sat, 20 Dec 2014 11:16:20 UTC +00:00, :users=>[{:name=>"Bob", :status=>nil}]},
{:name=>"game 2", :date=>Fri, 19 Dec 2014 11:16:20 UTC +00:00, :users=>[{:name=>"Bob", :status=>nil}]}]
But this function produces tons of SQL queries for GameVisits:
(1.0ms) SELECT "games"."id" FROM "games"
Game Load (1.3ms) SELECT "games".* FROM "games" WHERE "games"."id" IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101)
User Load (0.6ms) SELECT "users".* FROM "users" WHERE "users"."team_id" = $1 [["team_id", 1]]
GameVisit Load (0.6ms) SELECT "users_games".* FROM "users_games" WHERE "users_games"."user_id" = $1 AND "users_games"."game_id" = $2 LIMIT 1 [["user_id", 7], ["game_id", 1]]
GameVisit Load (0.6ms) SELECT "users_games".* FROM "users_games" WHERE "users_games"."user_id" = $1 AND "users_games"."game_id" = $2 LIMIT 1 [["user_id", 7], ["game_id", 2]]
GameVisit Load (0.5ms) SELECT "users_games".* FROM "users_games" WHERE "users_games"."user_id" = $1 AND "users_games"."game_id" = $2 LIMIT 1 [["user_id", 7], ["game_id", 3]]
..............
How can I optimise it?
Your user.game_visits.find_by_game_id(game.id) is what's firing all those queries. You can use eager-loading to get around this problem.
Change your assignment of users to the following:
users = self.users.includes(:game_visits)
Then change the value assignment of status in users_array to this:
status: user.game_visits.find{ |gv| gv.game_id == game.id }
The above line uses Enumerable's find, not ActiveRecord's. If you stick with find_by_game_id or some other ActiveRecord finder, it will still fire a query.
An additional optimization could be creating a hash for game_visits so that you can get around find altogether to do something like this:
user_game_visits_hash = \
user.game_visits.each_with_object({}) do |gv, hash|
hash[gv.game_id] = gv
end
# more code here
status: user_game_visits_hash[game.id]
And a minor point: In case your game_ids in the method's arguments are nil, the first two lines of your method will unnecessarily fire two queries, because you pluck all ids from games just to fetch all games. You can change that to this:
games = game_ids ? Game.find(game_ids) : Game.all