Ruby File that Will Log All SQL INSERTs and SQL DELETEs (and only those commands)

I am working with a PostgreSQL database. I have written a .rb file that I use to manipulate data in the database. I want to be able to log all the SQL INSERTs and DELETEs issued by this file. How do I go about that?

At the start of your script, create the needed temporary tables and add two triggers, one on insert and one on delete, and have them fire for each row accordingly. It also works with rules:
-- capture every inserted row
create temporary table foo_log_ins (like foo);
create rule log_foo_ins as
    on insert to foo
    do also
    insert into foo_log_ins select new.*;

-- capture every deleted row
create temporary table foo_log_del (like foo);
create rule log_foo_del as
    on delete to foo
    do also
    insert into foo_log_del select old.*;
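At the end of the script you can read the captured rows back. One caveat worth noting: the rules are permanent objects on foo, even though the temporary tables vanish with the session, so a minimal sketch of the wrap-up looks like this:

-- review everything the script inserted and deleted
select * from foo_log_ins;
select * from foo_log_del;

-- the rules outlive the session, so drop them explicitly when done
drop rule log_foo_ins on foo;
drop rule log_foo_del on foo;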

Related

PL/SQL Replicating a table with a trigger on Oracle DB

I have never used triggers in PL/SQL before, but I am now supposed to make a trigger that replicates the table of one database and creates a copy of this table in another database. I am using AQT (Advanced Query Tool) as the DBMS, I have two database connections, and I need to copy the table and/or data from DB1 to DB2. Only one table needs to be replicated. Following tutorialspoint, I have concluded that it should look somewhat like this:
CREATE OR REPLACE TRIGGER db_transfer
AFTER DELETE OR INSERT OR UPDATE ON X
WHEN (NEW.ID > 0)
I don't think I need FOR EACH ROW since I want a copy of the whole table, and the condition is supposed to trigger this replication of the DB table. Is this the right approach?
EDIT
For anyone who uses AQT, there is a feature under Create -> Trigger -> then click on the relevant tables etc. to create it.
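For reference, a minimal row-level sketch of mirroring DML over a database link; the link name db2_link and the columns id and name are assumptions, and the WHEN (NEW.ID > 0) clause is omitted because NEW is null on DELETE, which would stop the trigger from firing for deletes:

CREATE OR REPLACE TRIGGER db_transfer
AFTER DELETE OR INSERT OR UPDATE ON X
FOR EACH ROW
BEGIN
    IF INSERTING THEN
        INSERT INTO X@db2_link (id, name) VALUES (:NEW.id, :NEW.name);
    ELSIF UPDATING THEN
        UPDATE X@db2_link SET name = :NEW.name WHERE id = :OLD.id;
    ELSIF DELETING THEN
        DELETE FROM X@db2_link WHERE id = :OLD.id;
    END IF;
END;
/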

The "proper" way to atomically replace all contents in a PostgreSQL table?

In the project I have been recently working on, many (PostgreSQL) database tables are just used as big lookup arrays. We have several background worker services which periodically pull the latest data from a server, then replace all contents of a table with the latest data. The replacement has to be atomic, because we don't want a partially loaded table to be seen by readers doing lookups.
I thought the simplest way to do the replacing is something like this:
BEGIN;
DELETE FROM some_table;
COPY some_table FROM 'source file';
COMMIT;
But I found a lot of production code use this method instead:
BEGIN;
CREATE TABLE some_table_tmp (LIKE some_table);
COPY some_table_tmp FROM 'source file';
DROP TABLE some_table;
ALTER TABLE some_table_tmp RENAME TO some_table;
COMMIT;
(I omit some logic such as changing the owner of a sequence, etc.)
I just can't see any advantage of this method, especially after some investigation and experiments: SQL statements like ALTER TABLE and DROP TABLE acquire an ACCESS EXCLUSIVE lock, which blocks even a SELECT.
Can anyone explain what problem the latter SQL pattern is trying to solve? Or is it wrong, and should we avoid using it?
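For comparison, a variant of the first pattern worth measuring: TRUNCATE runs inside the transaction and rolls back with it, and it avoids DELETE's row-by-row bookkeeping. Note it takes an ACCESS EXCLUSIVE lock, so readers block until COMMIT rather than seeing a partial table. The file path is a placeholder:

BEGIN;
-- transactional in PostgreSQL: rolled back if anything below fails
TRUNCATE some_table;
COPY some_table FROM '/path/to/source_file';
COMMIT;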

Why doesn't a PL/SQL block create a temporary table?

I would like to create and populate temporary table with data to process it inside loop statement like this:
DECLARE
cnt NUMBER;
BEGIN
SELECT COUNT(tname) INTO cnt from tab where tname = 'MY_TEMP';
IF (cnt > 0) THEN
EXECUTE IMMEDIATE 'DROP TABLE MY_TEMP';
END IF;
EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE MY_TEMP (G NVARCHAR2(128), F NVARCHAR2(128), V NVARCHAR2(128)) ON COMMIT DELETE ROWS';
INSERT INTO MY_TEMP VALUES (N'G-value1', N'F-value1', N'V-value1');
INSERT INTO MY_TEMP VALUES (N'G-value2', N'F-value2', N'V-value2');
...
FOR record IN (SELECT G,F,V FROM MY_TEMP)
LOOP
... Do something sophisticated with record.G, record.F, record.V
END LOOP;
COMMIT;
END;
When I run this script inside PL/SQL Developer, it tells me for the very first INSERT that the MY_TEMP table or view doesn't exist, even though my EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE ...' statement seems to execute without errors. I checked: there is no MY_TEMP table in the tables list after script execution.
When I run EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE ...' alone, it runs OK and the MY_TEMP table really is created. After this, the whole script runs OK.
How do I use this script without manually pre-creating the MY_TEMP table?
How do I use this script without manually pre-creating the MY_TEMP table?
You can't. Unless of course you run everything after the creation of the temporary table using EXECUTE IMMEDIATE. But I cannot for a second recommend that approach.
The point is not that your script fails to run, but that it fails to compile. Oracle won't start running your block if it can't compile it first. At the point Oracle tries to compile your PL/SQL block, the table doesn't exist. You have a compilation error, not a runtime error.
I suspect that you are more familiar with temporary tables in SQL Server and are trying to use temporary tables in Oracle in the same way. If this is the case, then you will need to know that there are differences between temporary tables in Oracle and in SQL Server.
Firstly, there's no such thing as a local temporary table (i.e. a table visible to only one connected user) in Oracle. Oracle does have global temporary tables, but only the data in a global temporary table is temporary. The table itself is permanent, so once it has been created it will only be dropped if you explicitly drop it. Compare this with SQL Server temporary tables, which are dropped once all relevant users disconnect.
I really don't think you need to be creating the temporary table in your block. It should be sufficient to create it once beforehand.
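A minimal sketch of that arrangement, reusing the column definitions from the question: create the GTT once as a separate step, and the block then compiles because the table already exists:

-- one-time setup, run outside the block
CREATE GLOBAL TEMPORARY TABLE MY_TEMP (
    G NVARCHAR2(128),
    F NVARCHAR2(128),
    V NVARCHAR2(128)
) ON COMMIT DELETE ROWS;

-- the block itself no longer needs any DDL
BEGIN
    INSERT INTO MY_TEMP VALUES (N'G-value1', N'F-value1', N'V-value1');
    INSERT INTO MY_TEMP VALUES (N'G-value2', N'F-value2', N'V-value2');
    FOR record IN (SELECT G, F, V FROM MY_TEMP)
    LOOP
        NULL; -- do something sophisticated with record.G, record.F, record.V
    END LOOP;
    COMMIT; -- ON COMMIT DELETE ROWS empties the table here
END;
/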
Why do you want to drop and create the temp table? Simply create it once and use it.
The only way around your problem is to turn the INSERT INTO temp_table statements into EXECUTE IMMEDIATE as well; that way you bypass the table check at compile time.
But in my opinion this way is not good at all. There are some questions which have to be answered before answering this question:
1) Why is the temp table created and dropped every time? With a GTT we have the option to keep or remove the data after one Oracle session.
2) Is this script a one-time job? If yes, then we can go for a one-off GTT creation and the rest of the script will work fine.
The problem is not with your first insert. It is with the compile of your block. The table does not exist, but you are referencing it. Try creating it beforehand, so it is there just as it will be once the block finishes. Now the code is likely to compile, as the reference to the table exists when you run it.
However, then you'll get into trouble with the drop, as your code has a share lock on the table, so you are not allowed to drop it.
You either have to make your selects dynamic, or make sure the table is created and dropped outside the execution of your block.
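If the table really must be created inside the block, everything that touches it has to be dynamic too, including the cursor, or the block will not compile. A sketch of that workaround using a SYS_REFCURSOR:

DECLARE
    c SYS_REFCURSOR;
    g NVARCHAR2(128);
    f NVARCHAR2(128);
    v NVARCHAR2(128);
BEGIN
    EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE MY_TEMP
        (G NVARCHAR2(128), F NVARCHAR2(128), V NVARCHAR2(128))
        ON COMMIT DELETE ROWS';
    -- the inserts must be dynamic as well
    EXECUTE IMMEDIATE 'INSERT INTO MY_TEMP VALUES (:1, :2, :3)'
        USING N'G-value1', N'F-value1', N'V-value1';
    -- and the cursor over MY_TEMP must be opened dynamically
    OPEN c FOR 'SELECT G, F, V FROM MY_TEMP';
    LOOP
        FETCH c INTO g, f, v;
        EXIT WHEN c%NOTFOUND;
        NULL; -- do something sophisticated with g, f, v
    END LOOP;
    CLOSE c;
END;
/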
Creating a temporary table in Oracle is not best practice; use PIVOT instead.

Keep a shadow copy of a table while retaining records removed from the original

This is probably laughably easy for an SQL expert, but SQL (although I can use it) is not really my thing.
I've got a table in a DB. (Let's call it COMPUTERS)
About 10,000 rows. 25 columns. One unique key: column ASSETS.
Occasionally an external program will delete 1 or more of the rows, but isn't supposed to do that, because we still need to know some info from those rows before we can really delete the items.
We can't control the behavior of the external application so we came up with a different idea:
We want to create a second identical table (COMPUTERS_BACKUP) and initially fill this with a one-on-one copy of COMPUTERS.
After that, once a day copy new records from COMPUTERS to COMPUTERS_BACKUP and update those records in COMPUTERS_BACKUP where the original in COMPUTERS has changed (ASSETS column will never change).
That way we keep the last state of a record deleted from COMPUTERS.
Can someone supply the code for a stored procedure that can be scheduled to run once a day? I can probably figure this out myself, but it would take me several hours or so and I'm very pressed for time.
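For reference, a minimal sketch of such a daily job in T-SQL using MERGE; the procedure name and the col1/col2 columns are placeholders, only the ASSETS key comes from the question:

CREATE PROCEDURE dbo.SyncComputersBackup
AS
BEGIN
    SET NOCOUNT ON;
    MERGE COMPUTERS_BACKUP AS b
    USING COMPUTERS AS c
        ON b.ASSETS = c.ASSETS
    WHEN MATCHED THEN
        UPDATE SET b.col1 = c.col1, b.col2 = c.col2 -- ...remaining columns
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ASSETS, col1, col2)
        VALUES (c.ASSETS, c.col1, c.col2);
    -- no WHEN NOT MATCHED BY SOURCE clause: rows deleted from COMPUTERS
    -- are deliberately left untouched in COMPUTERS_BACKUP
END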
Just create a trigger on insert for the Computers table:
CREATE TRIGGER newComputer
ON [Computers]
AFTER INSERT
AS
BEGIN
    INSERT INTO COMPUTERS_BACKUP
    SELECT * FROM Inserted
END
It'll work when you insert a new computer into the Computers table, and it'll also insert the record into the backup table.
When you update Computers, you can change Computers_BACKUP too, with an update trigger:
CREATE TRIGGER updatedComputer
ON [Computers]
AFTER UPDATE
AS
BEGIN
    -- the pre-update rows are available via SELECT * FROM Deleted
    -- the post-update rows are available via SELECT * FROM Inserted
    UPDATE b SET
        b.col1 = i.col1,
        b.col2 = i.col2 -- ...and so on for the remaining columns
    FROM Computers_BACKUP AS b
    JOIN Inserted AS i ON b.ASSETS = i.ASSETS
END
In the end, I guess you don't want to delete the backup row when the original record is deleted from the Computers table. You can check more examples of using triggers on MSDN.
When a record is removed from the Computers table:
CREATE TRIGGER computerDeleted ON [Computers] AFTER DELETE
AS
BEGIN
    INSERT INTO Computers_BACKUP
    SELECT * FROM Deleted
END
Besides creating triggers, you may look into enabling Change Data Capture, which is available in SQL Server Enterprise Edition. It may be overkill, but it should be mentioned, and you may find it useful for other tables and objects.
IMHO a possible solution, if you never delete records (only update) from that table in your application, is to introduce an INSTEAD OF DELETE trigger:
CREATE TRIGGER tg_computers_delete ON computers
INSTEAD OF DELETE AS
    DELETE computers WHERE 1=2; -- deletes nothing, so the original DELETE is swallowed
It will prevent the deletion of the records.
Here is a SQLFiddle demo.
A trigger on the DELETE event can help you to guard this table:
CREATE TRIGGER backup_row_before_delete ON COMPUTERS_Table FOR DELETE
AS
    INSERT INTO Computers_Backup
    SELECT deleted.* FROM deleted
You can replace deleted.* with deleted.col1, deleted.col2, ... if you want to keep certain columns only.
will delete 1 or more of the rows, but isn't supposed to do that
Then you have permission and integrity issues.
You can most certainly use a trigger to record deletions (and updates of course) but I would not recommend you use it purely to keep a copy of stuff you didn't want deleted in the first place!
Remove delete permissions if you have to, or beef up your data integrity if you can. Without your schema it's hard to tell exactly how, though.
Finally, use your (INSTEAD OF) trigger to check whatever conditions you need to prevent the delete when appropriate.
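A sketch of such a conditional guard; the processed flag column is hypothetical:

CREATE TRIGGER tg_computers_guard ON computers
INSTEAD OF DELETE
AS
BEGIN
    -- let the delete through only for rows that meet your condition;
    -- everything else silently survives
    DELETE c
    FROM computers AS c
    JOIN deleted AS d ON c.ASSETS = d.ASSETS
    WHERE c.processed = 1
END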

Dumping a table's content in sqlite3 to be imported into a new database

Is there an easy way of dumping a SQLite database table into a text string with insert statements to be imported into the same table of a different database?
In my specific example, I have a table called log_entries with various columns. At the end of every day, I'd like to create a string which can then be dumped into another database with a table of the same structure called archive. (And empty the table log_entries.)
I know about the attach command to create new databases. I actually wish to add it to an existing one rather than creating a new one every day.
Thanks!
ATTACH "%backup_file%" AS Backup;
INSERT INTO Backup.Archive SELECT * FROM log_entries;
DELETE FROM log_entries;
DETACH Backup;
All you need to do is replace %backup_file% with the path to your backup database. This assumes that your Archive table is already defined and that you are using the same database file to accumulate your archive.
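For the nightly run, the same statements can be fed to the sqlite3 shell from a cron-driven script; the file paths here are placeholders:

$ sqlite3 /path/to/main.db <<'SQL'
ATTACH '/path/to/backup.db' AS Backup;
INSERT INTO Backup.Archive SELECT * FROM log_entries;
DELETE FROM log_entries;
DETACH Backup;
SQL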
$ sqlite3 exclusion.sqlite '.dump exclusion'
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE exclusion (word string);
INSERT INTO "exclusion" VALUES('books');
INSERT INTO "exclusion" VALUES('rendezvousing');
INSERT INTO "exclusion" VALUES('motherlands');
INSERT INTO "exclusion" VALUES('excerpt');
...
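To load such a dump into another database, the sqlite3 shell output can be piped straight into it; the file names are placeholders:

$ sqlite3 exclusion.sqlite '.dump exclusion' | sqlite3 new.sqlite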