I have a Users table and a LoginLog table. How can I insert a row into the LoginLog table, logging Name, Password, and LastLogonTime, whenever the LastLogonTime column in the Users table is updated?
You need a fairly simple trigger on the update of the Users table. The trickier part is being aware of the fact that triggers fire only once per statement - and such a statement could potentially update multiple rows, which would then all be present in the Inserted and Deleted pseudo tables inside the trigger.
You need to write your trigger to handle this set-based behavior correctly. To properly link the old and new values, your Users table must have a proper primary key (you didn't mention anything about that) - something like a UserId or similar.
Try something like this:
CREATE TRIGGER dbo.trg_LogUserLogon
ON dbo.Users
FOR UPDATE
AS
-- inspect the Inserted (new values, after UPDATE) and Deleted (old values, before UPDATE)
-- pseudo tables to find out which rows have had an update in the LastLogonTime column
INSERT INTO dbo.LoginLog (Name, Password, LastLogonTime)
SELECT
i.Name, i.Password, i.LastLogonTime
FROM
Inserted i
INNER JOIN
-- join the two sets of data on the primary key (which you didn't specify)
-- could be i.UserId = d.UserId or something similar
Deleted d on i.PrimaryKey = d.PrimaryKey
WHERE
-- only select those rows that have had an update in the LastLogonTime column
i.LastLogonTime <> d.LastLogonTime
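One caveat not raised in the question: if LastLogonTime can be NULL (say, for a user who has never logged on before), the <> comparison will not catch the change from NULL to a value, since comparisons involving NULL evaluate to UNKNOWN. A NULL-aware variant of the WHERE clause might look like this (a sketch - adjust to your schema):
WHERE
    i.LastLogonTime <> d.LastLogonTime
    OR (i.LastLogonTime IS NOT NULL AND d.LastLogonTime IS NULL)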
But please also, by all means, take @Larnu's comments about never storing passwords in plain text into account! Storing plain-text passwords is a horribly bad practice and needs to be avoided at all costs.
I'm new to Oracle and SQL but trying to learn triggers. I think I have some syntax errors here, but let me explain what I am trying to do.
I have two tables: 1. group_membership with the columns
user_internal_id | group_internal_id (FK) | joined_time
and 2. group_details with the columns
group_internal_id (PK) | group_name | group_owner | created_time | movie_cnt | member_cnt
(PK and FK stand for Primary Key and Foreign Key that relates to that Primary Key respectively.)
What I want to do:
After a new row is inserted into the group_membership table, I want to
update the value of member_cnt in the group_details table with the amount of times a particular group_internal_id appears in the group_membership table.
--
Now, the DBA for the app we are working on has created a trigger that simply updates the member_cnt of a particular group by reading the group_internal_id of the row inserted into group_membership, then adding 1 to member_cnt. That probably works better, but I want to figure out why my trigger is producing errors. Here is the code:
CREATE OR REPLACE TRIGGER set_group_size
AFTER INSERT ON group_membership
FOR EACH ROW
DECLARE g_count NUMBER;
BEGIN
SELECT COUNT(group_internal_id)
INTO g_count
FROM group_membership
GROUP BY group_internal_id;
UPDATE group_details
SET member_cnt = g_count
WHERE group_details.group_internal_id = group_membership.group_internal_id;
END;
The errors I'm receiving are:
Error(7,5): PL/SQL: SQL Statement ignored
Error(9,45): PL/SQL: ORA-00904: "GROUP_MEMBERSHIP"."GROUP_INTERNAL_ID": invalid identifier
I came here because my efforts at troubleshooting have been futile. Hope to hear some feedback. Thanks!
The immediate issue with your code is the update query of your trigger:
UPDATE group_details
SET member_cnt = g_count
WHERE group_details.group_internal_id = group_membership.group_internal_id;
group_membership is not defined in that scope. To refer to values on the row being inserted, use the :new pseudo-record instead:
WHERE group_details.group_internal_id = :new.group_internal_id;
Another problem is the select query, which might return multiple rows (one per group). It needs a where clause that filters on the newly inserted group_internal_id:
SELECT COUNT(*)
INTO g_count
FROM group_membership
WHERE group_internal_id = :new.group_internal_id;
But these obvious fixes are not sufficient. Oracle won't let you select from the table that the trigger fired upon. On execution, you would hit this error:
ORA-04091: table GROUP_MEMBERSHIP is mutating, trigger/function may not see it
There is no easy way around this. Let me suggest that this whole design is broken: the count of members per group is derived information that can easily be computed on the fly whenever needed. Instead of trying to store it, you could, for example, use a view:
create view view_group_details as
select group_internal_id, group_name,
(
select count(*)
from group_membership gm
where gm.group_internal_id = gd.group_internal_id
) member_cnt
from group_details gd
I agree with @GMB that your design is fundamentally flawed, but if you insist on keeping a running count, there is an easy solution to the mutating-table problem they point out. The entire process is predicated on maintaining the count in the group_details.member_cnt column. Since that column already holds the previous count, you do not need to count the rows at all - so eliminate the select. Your trigger becomes:
create or replace trigger set_group_size
after insert on group_membership
for each row
begin
update group_details
set member_cnt = member_cnt + 1
where group_details.group_internal_id = :new.group_internal_id;
end;
Of course, you then need to handle deletes from group_membership and updates of group_internal_id (a matching delete trigger is sketched below). Also, what happens when two sessions modify the same group's membership simultaneously? Maintaining a running total for a derivable column is just not worth the effort. The best option is to create the view as GMB suggested.
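For completeness, a matching delete trigger might look like this (a sketch under the same assumptions, and with the same concurrency caveat):
create or replace trigger set_group_size_del
after delete on group_membership
for each row
begin
    update group_details
    set member_cnt = member_cnt - 1
    where group_details.group_internal_id = :old.group_internal_id;
end;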
I have a bankcustomer table which looks like this:
create table bankcustomer
(
cpr char(10) primary key,
name varchar(30) not null
)
And an account table which looks like this:
create table account
(
accountnr int identity(1001,1) primary key,
accountowner char(10) foreign key references bankcustomer,
created date not null,
balance decimal(14,2) not null
)
I want to write a trigger in SQL Server that limits a bankcustomer so that a bankcustomer can have no more than 3 accounts in the account table.
I create an account for a specific bank customer by inserting a value in the accountowner column in the account table that matches a cpr from the bankcustomer table (when inserting a record into the account table).
I have this code so far:
create trigger mytrigger4
on account
for insert
as
if exists (select count(*)
from inserted
join bankcustomer on inserted.accountowner = bankcustomer.cpr
where cpr = inserted.accountowner
having count(*) > 3)
begin
rollback tran
raiserror('A customer must have a maximum of 3 accounts', 16, 1)
end
go
The problem is that I can keep creating accounts (insert records in the account table) for a customer even though the customer already has 3 accounts. Which means the code in the trigger does not work at all.
Any help would be appreciated!
Let's think about your code. First, it is apparent that you test using single row inserts only. And that probably carries over into your sql code generally. That's bad, because an insert (or update or delete or merge) can affect any number of rows. While you can expect that the majority of inserts from an application are likely to be single rows, there are always situations that affect multiple rows. And that is an assumption that should always be in your mind when writing sql code generally - and triggers specifically.
Your test is based on exists. That tests for the existence of rows generated by the query inside the exists clause. So - look carefully at your query. First, you count but do not group. Therefore, you are counting all the rows generated by the query. This is incorrect because of the assumption mentioned earlier. But let's ignore that for the moment and examine the select statement alone.
Your select statement joins inserted to the parent bankcustomer. The join is correct but why is there a where clause? And let's sidetrack into best practices. Always - ALWAYS - give each table a useful alias and use that alias when referencing columns. Why? Because this makes it easier for others to read and understand your query. BTW - a useful alias is not a single character. Yep - writing code can be a little work.
Let's continue. Your query counts all rows in the resultset. If you insert a single row, what is the result of your join? We know that an account is associated with a single bankcustomer. So when you insert a single row into account, the join will produce A SINGLE ROW. And counting that single-row resultset will always produce a single row with the value of - tadah - 1. Now, there is a way to make your trigger generate an error: insert 4 or more rows with a single statement. The error will probably not be accurate, but it will kill the transaction and display a message.
So you see that your logic is flawed. You need to count rows in the actual table (account), not inserted. But not all the rows - because that would be inefficient. You just need to consider all rows that "share" the accountowner values found in inserted. Note the plural "values". This is where your single row assumption fails. So, how to do that? Here is one way to write your trigger. Note that the first query is included to let you "see" what the count query is producing - this is for debugging only. Production triggers should never return a resultset in any fashion.
alter trigger mytrigger4
on account
for insert
as begin
select cust.cpr, count(*)
from bankcustomer as cust join account as acc on cust.cpr = acc.accountowner
where exists (select * from inserted as ins where ins.accountowner = cust.cpr)
group by cust.cpr;
if exists (select cust.cpr, count(*)
from bankcustomer as cust join account as acc on cust.cpr = acc.accountowner
where exists (select * from inserted as ins where ins.accountowner = cust.cpr)
group by cust.cpr
having count(*) > 3)
begin
rollback tran
raiserror('A customer must have a maximum of 3 accounts', 16, 1)
end
end;
go
I'll leave it to you to actually test it thoroughly - which includes the use of insert statements that insert multiple rows. And, of course, you will vary the test data to include customers that have no accounts, fewer than 3 accounts, exactly 3 accounts, and more than 3 accounts (because sometimes things happen and extra accounts get added despite your best efforts).
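For example, a multi-row test might look like this (hypothetical data - 'C1' is assumed to be an existing cpr that already has two accounts):
insert into account (accountowner, created, balance)
values ('C1', getdate(), 0),
       ('C1', getdate(), 0);
-- should be rejected: 'C1' would end up with 4 accounts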
So there's this table of just about 40,000 rows I am looking to update. Colleague said it's best to incrementally update the table instead of complete delete and load.
So I've tried hashing out the design and logic of a script to do this, but my inexperience is getting to me. I just don't know what's efficient and unneeded to incrementally update a table.
Currently, the warehouse looks like this: data comes from source into a table (let's call this T1) in Teradata. Then it's sent into another table (let's call this T2) in Teradata with some added fields such as timestamp. Lastly, a view is built on that last table for security reasons.
So with that laid out, I was thinking of creating a temp/volatile table with data from T1. This would have all the data up to the time the script is run with new records. Then, go through the entire table seeing if the ID (primary index) already exists in T2, and if not, add it to another temp table. Then somehow combine the second temp table with T2 and override T2 and build a view on top of that.
Does this make any sense?
There's also the possibility of records being updated. So they would already exist in T2, but have updated data in a new version of T1. I think comparing the values of all the columns from T1 to T2 would be highly inefficient, but I can't think of another way to do this.
A 40,000 row delete and insert should be pretty painless for any modern database. Ditto for updates.
The real reason for doing an incremental delete/update/insert is so you can log the changes and timestamp rows in the permanent table with the date/time of insertion and/or last update. The usual technique goes something like this:
remove rows from the permanent table that don't exist in the temp table
update rows that exist in both tables
insert rows that exist in the temp table, but don't exist in the permanent table.
Looking at the Teradata docs, that would be something like this (no warranties about this being syntactically correct, since I don't have a Teradata instance to play with):
delete permanent p
where not exists ( select *
from temp t
where t.id = p.id
)
update p
from permanent p ,
temp t
set ...
where t.id = p.id
insert permanent
select ...
from temp t
where not exists ( select *
from permanent p
where p.id = t.id
)
One might note that the deletes might get a little hairy if there are dependent foreign key constraints involved.
One might also note that on the update, the where clause might get a tad...complicated if you want to check for actual changes to column values: not much point in updating a row if nothing has changed.
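For example, assuming hypothetical columns col1 and col2, a change check could be bolted onto the update like this (same caveat as above about untested Teradata syntax; NULL handling ignored for brevity):
update p
from permanent p ,
     temp t
set col1 = t.col1,
    col2 = t.col2
where t.id = p.id
and ( p.col1 <> t.col1
   or p.col2 <> t.col2 )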
There's a Teradata MERGE command that you might find useful, check this post:
https://forums.teradata.com/forum/database/merge-syntax-simple-version
merge into merge_tmp as t using (select 1 as a,'stf' as b,'uuj' as c) as s
on t.a = s.a
when matched then update set c = s.c
when not matched then insert values (s.a,s.b,s.c);
If you need to match on more columns, simply put an AND in the ON clause.
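For example, matching on both a and b would look like this (same toy tables as above):
merge into merge_tmp as t using (select 1 as a,'stf' as b,'uuj' as c) as s
on t.a = s.a and t.b = s.b
when matched then update set c = s.c
when not matched then insert values (s.a,s.b,s.c);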
Edit: If you want to use MERGE you might also need to use a delete statement like the one in nicholas' post.
I have two tables in my database, one is Transactions and the other is TransactionHistories. The latter is essentially an auditing table, whereby a trigger executes on insert, update and delete on Transactions to capture a screenshot of the data.
I am successfully retrieving all of the data stored in the Transactions table where the columns match, but the difficulty comes when I am trying to retrieve data from another table using a foreign key. For instance:
The transaction table has a field "TransactionType_TransactionTypeId", but in the audit table we wish to store its 'name' equivalent as "TransactionTypeName". This needs to be populated from the "TransactionTypes" table, which has the fields "TransactionTypeId" and "Name".
I am struggling to write a query to retrieve this as we wish. I am trying something similar to the following but having little success:
SELECT #TransactionTypeName=Name
FROM TransactionTypes
WHERE inserted.TransactionType_TransactionTypeId=TransactionTypes.TransactionTypeId;
I'm assuming that is a syntactic nightmare. If someone could point me in the right direction I would be extremely grateful!
Well, to get the name you should do the following:
select #TransactionTypeName = TT.Name
from inserted as i
left outer join TransactionTypes as TT on TT.TransactionTypeId = i.TransactionType_TransactionTypeId
But you have to know that the inserted table can contain more than one row, and this assignment captures the value for only one of them.
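A set-based alternative avoids that limitation entirely. A sketch, where TransactionId and the audit column list are assumptions to be adjusted to your actual schema:
insert into TransactionHistories (TransactionId, TransactionTypeName)
select i.TransactionId, TT.Name
from inserted as i
left outer join TransactionTypes as TT on TT.TransactionTypeId = i.TransactionType_TransactionTypeId;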
I am using the following to list the row count for all of my tables:
select convert(varchar(30), object_name(id)) [Table Name], rows from sysindexes
where object_name(id) not like 'sys%' and indid = 1
order by object_name(id)
I confess that I found this somewhere and only have a conceptual idea of what it is doing. But for my purposes, where I want to perform an application action and reverse engineer what happened in the database, it works well to identify new rows (I copy and paste before/after results into Excel to compare).
Now, I would also like to know which tables have been updated. On (almost) all of my tables there is a ModifiedOn column, so I am hoping I can add the max of this to my output, which will tell me when the table's contents were last updated.
I have no idea how to join these two, and any help is appreciated.
I would strongly advise against this approach, as it is DB dependent and unreliable.
Creating an ON INSERT or ON UPDATE trigger is the correct solution; in the trigger you can put the new or updated data into a separate table which you can then query. Triggers are the tool for monitoring changes in the database without touching the applications that use it.
Example trigger for 'after update' on table MY_TABLE(id, name):
DELIMITER $$
CREATE TRIGGER mark_changes AFTER UPDATE ON my_table FOR EACH ROW
BEGIN
    INSERT INTO tracking_table VALUES ('Change in table my_table', OLD.id, NEW.id);
END$$
DELIMITER ;
This assumes you have a table tracking_table(description, old_id, new_id).
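A minimal sketch of that table (the column types are assumptions):
CREATE TABLE tracking_table (
    description VARCHAR(100),
    old_id INT,
    new_id INT
);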