How to encrypt/decrypt data in some columns of a table so that a newly inserted record also gets encrypted - sql

I know how to insert a new record like this:
INSERT INTO dbo.Customer_data (Customer_id, Customer_Name, Credit_card_number)
VALUES (25665, 'mssqltips4', EncryptByKey( Key_GUID('SymmetricKey1'), CONVERT(varchar,'4545-58478-1245') ) );
But I want to insert a new record with a normal INSERT statement and have it encrypted automatically.
ex:
INSERT INTO dbo.Customer_data (Customer_id, Customer_Name, Credit_card_number)
VALUES (25665, 'mssqltips4', '4545-58478-1245');

A few months ago I had a similar situation: a table containing personal data needed some of its columns encrypted, but the table was used by a legacy application and had many references.
So, you can create a separate table to hold the encrypted data:
CREATE TABLE [dbo].[Customer_data_encrypted]
(
 [customer_id] INT PRIMARY KEY -- you can create a foreign key to the original table, too
,[name] VARBINARY(256)               -- size to fit the encrypted payload
,[credit_card_number] VARBINARY(256)
);
Then create an INSTEAD OF INSERT, UPDATE, DELETE trigger on the original table. The logic in the trigger is simple:
on DELETE, delete from both tables
on INSERT/UPDATE, encrypt the data and insert it into the new table; write some kind of mask to the original table (for example *** or 43-****-****-****), as sketched below
Then perform an initial migration to move the data from the original table to the new one and mask it.
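To make the trigger logic concrete, here is a minimal sketch of the INSERT branch, reusing SymmetricKey1 from the question (Certificate1 is a placeholder, and the original Credit_card_number column is assumed to be a plain varchar that now holds only the mask; the UPDATE and DELETE branches follow the same pattern):
CREATE TRIGGER trg_Customer_data_ins ON dbo.Customer_data
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    OPEN SYMMETRIC KEY SymmetricKey1 DECRYPTION BY CERTIFICATE Certificate1;

    -- real values go, encrypted, into the side table
    INSERT INTO dbo.Customer_data_encrypted (customer_id, name, credit_card_number)
    SELECT Customer_id,
           EncryptByKey(Key_GUID('SymmetricKey1'), CONVERT(varchar(100), Customer_Name)),
           EncryptByKey(Key_GUID('SymmetricKey1'), CONVERT(varchar(100), Credit_card_number))
    FROM inserted;

    -- only a mask lands in the original table; INSTEAD OF triggers
    -- do not fire recursively, so this INSERT is processed directly
    INSERT INTO dbo.Customer_data (Customer_id, Customer_Name, Credit_card_number)
    SELECT Customer_id, Customer_Name, '****-****-****-****'
    FROM inserted;

    CLOSE SYMMETRIC KEY SymmetricKey1;
END;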
This approach is nice because:
every insert/update against the original table continues to work
you can create the trigger with EXECUTE AS OWNER so that it has access to the symmetric keys and can do the encryption directly in T-SQL, without opening the certificates in application code or granting key access to users who should not have it
every read reference gets the masked data, so you are not worried about critically breaking the application
keeping the logic in the trigger gives you a single place to create and change how the information is handled
It depends on your environment and business needs; for one of my tables I stored the encrypted value in a new column instead of a separate table. So, choose what is more appropriate for you.
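If you go with the in-place variant instead, the change can be as small as adding one column (the column name here is hypothetical):
ALTER TABLE dbo.Customer_data
    ADD Credit_card_number_enc VARBINARY(256) NULL;  -- holds EncryptByKey output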

Related

Synchronizing data in tables in two different schemas in an Oracle database

I have an attendance table in one Oracle schema named attendance_database, and another schema named payroll_database whose table is named payroll_attendance_table.
Is it possible that when I insert new data into attendance_table, payroll_attendance_table automatically synchronizes (auto insert/update) with the newly inserted data? I hear it can be done with a trigger. Is that so, or is there another way? I want to handle this at the database end, not in any back-end language.
Either:
Create a database link from attendance_database to payroll_database.
Create a row-level trigger on attendance_database.attendance_table to INSERT new rows across the database link into the payroll_database.payroll_attendance_table.
Or:
Create a database link from payroll_database to attendance_database.
Create payroll_attendance_table as a MATERIALIZED VIEW with FAST REFRESH of attendance_database.attendance_table across the database link (refreshed on a schedule, since ON COMMIT refresh is not supported across a link). Both options are sketched below.
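Hedged sketches of both options; the link names, password, and TNS alias are all hypothetical:
-- Option 1: link from attendance_database, used by the trigger shown further down
CREATE DATABASE LINK payroll_link
    CONNECT TO payroll_database IDENTIFIED BY some_password
    USING 'PAYROLL_TNS';

-- Option 2: run this in attendance_database...
CREATE MATERIALIZED VIEW LOG ON attendance_table WITH PRIMARY KEY;

-- ...then this in payroll_database (scheduled refresh, here every 5 minutes)
CREATE MATERIALIZED VIEW payroll_attendance_table
    REFRESH FAST START WITH SYSDATE NEXT SYSDATE + 5/1440
    AS SELECT * FROM attendance_table@attendance_link;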
First, since the data is just in two different schemas, are you sure you really need to copy it from one to the other? If attendance_database.attendance_table and payroll_database.payroll_attendance_table are supposed to have the same data, it would make vastly more sense to eliminate one and just create a synonym in payroll_database that points to attendance_database.
grant select on attendance_database.attendance_table
to payroll_database;
create synonym payroll_database.payroll_attendance_table
for attendance_database.attendance_table;
If they are not supposed to have the same data (perhaps the payroll_database.payroll_attendance_table table is supposed to have additional attributes, for example), you'd most likely want to make payroll_database.payroll_attendance_table a child table. Something like
grant references on attendance_database.attendance_table
to payroll_database;
create table payroll_database.payroll_attendance_table (
attendance_id integer references attendance_database.attendance_table( attendance_id ),
<<additional attributes>>
);
If you really, really want to have two separate copies of the same data and to maintain them via a trigger, you could do something like
create or replace trigger trg_pointless_sync
    before insert or update or delete on attendance_database.attendance_table
    for each row
declare
begin
    if inserting
    then
        insert into payroll_database.payroll_attendance_table( attendance_id,
                                                               col1,
                                                               col2,
                                                               ...
                                                               colN )
        values( :new.attendance_id,
                :new.col1,
                :new.col2,
                ...,
                :new.colN );
    elsif updating
    then
        update payroll_database.payroll_attendance_table
           set col1 = :new.col1,
               col2 = :new.col2,
               ...
               colN = :new.colN
         where attendance_id = :new.attendance_id;
    else
        -- deleting: use :old, since :new is null in the delete case
        delete from payroll_database.payroll_attendance_table
         where attendance_id = :old.attendance_id;
    end if;
end;
Of course, trigger-based solutions are usually somewhat problematic. They can impose pretty significant performance penalties by transforming nice, fast set-based operations on the source table into row-by-row operations. They can create maintenance issues if, say, a DBA wants to quickly restore attendance_table to a previous state because someone goofed some data, and doesn't realize that deleting all the data has the side effect of deleting all the data from the payroll_database table. And they almost always create synchronization issues: what happens if someone modifies one of the rows in payroll_database, for example? Or if someone updates the primary key of attendance_table? The trigger I wrote isn't smart enough to handle that correctly (you could extend it, of course, but there are usually more of these corner cases than you're likely to code for), so the two tables will eventually be not-quite-in-sync.

SQL Server, Using a stored procedure to insert data

I'm trying to insert data into my database using a stored procedure, but 3 of my columns use the int IDENTITY type and I cannot insert; it keeps saying I cannot do this while IDENTITY_INSERT is OFF.
When IDENTITY_INSERT is ON, it just means that you can put your own values in the IDENTITY column. It doesn't disable the FK constraint you have on the table. You can delete the FK constraint, or disable it, and risk having logically inconsistent data in your DB, or you can fix your SP so you won't insert any duplicate values.
Something is amiss. Three columns in a single table of type Identity? I'm having difficulty imagining what they could represent, and I have to wonder where the natural keys are.
In any case, IDENTITY_INSERT isn't something you want to putz with casually. It's an administrative feature to allow ad hoc changes to the data, for example bulk loading the database.
If you do actually know what the identities are (as input to your stored procedure) then the table is misdefined, because it's supposed to be the identity source. If you don't know, or you're willing to let the table generate identity values, then you simply don't mention those columns in your INSERT statement. At most, the generated values would be OUTPUT parameters to your stored procedure.
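A minimal sketch of that last point, with hypothetical table, column, and parameter names: the IDENTITY column is simply omitted from the INSERT, and the generated value is handed back through an OUTPUT parameter.
CREATE PROCEDURE dbo.InsertCustomer
    @Name  varchar(100),
    @NewId int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    -- the IDENTITY column is not mentioned in the column list
    INSERT INTO dbo.Customer (Name)
    VALUES (@Name);

    -- return the generated identity to the caller
    SET @NewId = SCOPE_IDENTITY();
END;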

Copy database with data and foreign keys without identity insert on

Scenario:
I have a set of test data that needs to be deployed to our build server daily (our build server database is first overwritten with the current live database, and has all data over a month old removed).
This test data has foreign key references within it which need to stay.
I can't simply switch on IDENTITY_INSERT as the primary keys may clash with data that is already in the database (because we aren't starting from a blank database).
The test data needs to be regenerated fairly regularly, so the thought of going through the deploy script, fudging the id columns to be something outlandish (or a negative number, for instance), and then changing the related foreign key columns to match every time we regenerate the data doesn't thrill me.
Ideally I would like to know if there is a tool which can scan a database, pick up the foreign key constraints and generate the insert scripts accordingly, something like:
INSERT INTO MyTable VALUES('TEST','TEST');
DECLARE @Id INT;
SET @Id = (SELECT @@IDENTITY);
INSERT INTO MyRelatedTable VALUES(@Id,'TEST');
It sounds like you want an ETL process that copes with the change in id. As you're using SQL Server, look at the OUTPUT clause: use it to build up temporary tables that map the "old" id to the "new" id for each primary key, and use that mapping to fix up the foreign keys when migrating the "child" tables.
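A hedged sketch of that mapping idea (the staging tables and column names are hypothetical; MERGE is used because, unlike a plain INSERT, its OUTPUT clause can see the source row's old id):
DECLARE @IdMap TABLE (OldId int, NewId int);

MERGE INTO dbo.MyTable AS tgt
USING dbo.MyTableStaging AS src
    ON 1 = 0                        -- never matches, so every row inserts
WHEN NOT MATCHED THEN
    INSERT (Col1, Col2) VALUES (src.Col1, src.Col2)
OUTPUT src.Id, inserted.Id INTO @IdMap (OldId, NewId);

-- remap the foreign keys while migrating the child rows
INSERT INTO dbo.MyRelatedTable (MyTableId, Col1)
SELECT m.NewId, s.Col1
FROM dbo.MyRelatedTableStaging AS s
JOIN @IdMap AS m ON m.OldId = s.MyTableId;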

Can I insert into different databases depending on a query in SQLite?

I have two SQLite databases attached into one connection: db1 and db2. I have a view that UNIONS the tables from both databases and adds a column 'database' specifying which database it came from. I am trying to create a trigger on insert into the view that will instead insert into the correct database.
Imagine the following schema for table Data:
id INTEGER PRIMARY KEY,
parent INTEGER,
data TEXT
This would be the schema for the view DataView:
id INTEGER PRIMARY KEY,
database TEXT,
parent INTEGER,
data TEXT
What I have so far:
CREATE TRIGGER DataViewInsertTrigger AFTER INSERT ON DataView
BEGIN
INSERT INTO database.Data
SELECT database
FROM DataView
WHERE id=new.parent;
END;
Is what I'm trying to do even possible? If so, how would I finish the trigger?
No, you cannot insert into an entirely different database based on information you get in a trigger. The trigger executes with a context that is specific to the database which invoked it. The other database would be in a completely unrelated file, in SQLite.
The fact that you have a single connection attaching the two doesn't make one available from the other. What would happen if you tripped the trigger from a query made via a connection which only loaded the one DB?
Perhaps you want two tables in the same database?
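For example, a minimal sketch of that same-file approach (table names are hypothetical): both tables live in one database, and an INSTEAD OF trigger on the view routes each row by its database column.
CREATE TABLE Data_db1 (id INTEGER PRIMARY KEY, parent INTEGER, data TEXT);
CREATE TABLE Data_db2 (id INTEGER PRIMARY KEY, parent INTEGER, data TEXT);

CREATE VIEW DataView AS
    SELECT id, 'db1' AS "database", parent, data FROM Data_db1
    UNION ALL
    SELECT id, 'db2' AS "database", parent, data FROM Data_db2;

CREATE TRIGGER DataViewInsertTrigger INSTEAD OF INSERT ON DataView
BEGIN
    -- each INSERT...SELECT contributes a row only when its WHERE matches
    INSERT INTO Data_db1 (parent, data)
    SELECT NEW.parent, NEW.data WHERE NEW."database" = 'db1';
    INSERT INTO Data_db2 (parent, data)
    SELECT NEW.parent, NEW.data WHERE NEW."database" = 'db2';
END;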
While Borealid is correct that the trigger itself cannot insert into a different file, what you can do is call a custom SQLite function which itself generates a query to insert into a different file.

a special case when modifying the database

Sometimes I face the following case in my database design, and I want to know the best practice for handling it:
For example, I have a specific table, and after a while, when the database is in operation and some real data has already been entered, I need to add some required fields (that are not supposed to accept NULL).
What is the best practice in this situation?
Make the field accept NULL (since some data is already entered in the table, sacrificing the important constraint) and try to force the user to enter this field through some validation in the code.
Truncate all the entered data and re-enter it again (tedious work).
Any other suggestions about this issue?
It depends on requirements. If the data to populate existing rows for the new column isn't available immediately, then I would generally prefer to create a new table and just populate new rows as the data arrives. If and when you have the data for every row, move the new column into the original table.
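A short sketch of that side-table idea, with hypothetical names: the new required attribute lives in its own table until every row has a value, so the NOT NULL constraint never has to be relaxed.
CREATE TABLE dbo.customer_extra (
    customer_id INT NOT NULL PRIMARY KEY
        REFERENCES dbo.customer (customer_id),
    new_field   VARCHAR(50) NOT NULL   -- the required field, never NULL here
);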
If possible, I would set a default value for the new column.
e.g. for varchar:
alter table table_name
add column_name varchar(10) not null
constraint column_name_default default ('Test')
After you have updated the data, you can then drop the default:
alter table table_name
drop constraint column_name_default
A lot will come down to your requirements.
It depends on your application, your database schema, your entities.
The best way to go about it is to truncate the data and re-enter it, but it need not be too tedious. Temporary tables and table variables can assist a great deal with this. A simple procedure comes to mind:
In SQL Server Management Studio, right-click the table you wish to modify and select Script Table As > CREATE To > New Query Editor Window.
Add a # in front of the table name in the CREATE statement.
Move all records into the temporary table, using something to the effect of:
INSERT INTO #temp SELECT * FROM original
Then run the script so that all your records are held in the temporary table.
Truncate your original table, and make any changes necessary.
Right-click on the table and select Script Table As > INSERT To > Clipboard, paste it into your query editor window, and modify it to read records from the temporary table using INSERT .. SELECT.
That's it. Admittedly not quite straightforward, but a well-kept database is almost always worth a slight hassle. The whole round trip is sketched below.
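Something like the following, with hypothetical table and column names (SELECT INTO stands in for the scripted CREATE, for brevity):
-- 1. copy the existing rows aside
SELECT * INTO #temp FROM dbo.original;

-- 2. rebuild the table with the new NOT NULL column
DROP TABLE dbo.original;
CREATE TABLE dbo.original (
    id      INT PRIMARY KEY,
    old_col VARCHAR(50),
    new_col VARCHAR(10) NOT NULL   -- the new required field
);

-- 3. reinsert, supplying a value for the new column
INSERT INTO dbo.original (id, old_col, new_col)
SELECT id, old_col, 'Test' FROM #temp;

DROP TABLE #temp;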