When encrypting a column in Sybase, does it generate a log? - sql

So let's say I want to encrypt a column in Sybase, and that column has 1 or X million records:
1.- Is it as slow as an update?
2.- Does it generate a log?
Using Sybase ASE
I don't want to drop the table and recreate it.
Thank you

I am not entirely sure about the logging, but I do know that it can take a while on a large table, so it is probably a bit slower than an update on the same unencrypted column, since the encryption adds to the length of the column.
My experience leads me to believe it is at least partially logged, otherwise it would not be recoverable should an error occur during the conversion from plain text to cipher text.
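If you want a rough sense of whether the operation writes to the transaction log, one option (a sketch, assuming you run it in the database that holds the table) is to compare the log usage reported by sp_spaceused syslogs before and after the alter:

-- check how much of the transaction log is used before the alter
sp_spaceused syslogs
go

-- ... run the alter table ... encrypt ... here ...

-- then check again; a noticeable jump suggests the conversion is being logged
sp_spaceused syslogs
go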
If you do not want to drop and recreate your table, your options are a bit limited.
bcp out/in
bcp the data out of your table.
truncate table
alter table and modify column(s) with encryption
bcp the data back into the table - use fast bcp (no triggers or indexes) to avoid logging. A rough command sketch of these steps follows.
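A sketch of the bcp route, using placeholder names (a mydb..customer table, an ssn column, and an existing encryption key ssn_key; none of these come from the question):

-- 1. from the OS shell: copy the data out in character mode
bcp mydb..customer out customer.dat -c -Usa -SMYSERVER

-- 2. in isql: empty the table and add encryption to the column
truncate table customer
go
alter table customer modify ssn encrypt with ssn_key
go

-- 3. back at the OS shell: copy the data back in
--    (with no triggers or indexes on the table this uses fast, minimally logged bcp)
bcp mydb..customer in customer.dat -c -Usa -SMYSERVER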
Select Into
select into from your existing table into a temp table, encrypting the column(s) in the process.
truncate table
alter table and modify column(s) with encryption
insert the data from the temp table back into the production table (see the sketch below).
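A comparable sketch of the select into route, again with placeholder names, and assuming the select into/bulkcopy option is enabled:

-- copy the rows to a temp table (the column is still plain text at this point)
select * into #customer_tmp from customer
go

-- empty the production table and encrypt the column
truncate table customer
go
alter table customer modify ssn encrypt with ssn_key
go

-- the production table still exists after the truncate, so copy the rows back with
-- insert ... select; the values are encrypted as they are written
insert into customer select * from #customer_tmp
go
drop table #customer_tmp
go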
Adaptive Server Enterprise 15.5 > Encrypted Columns Users Guide > Encrypting Data > Specifying encryption on new tables

Related

Truncation of large table in SQL Server database

I would like to completely clear one table in my SQL Server database.
Unfortunately, the table is large (> 90GB). I am going to use the TRUNCATE statement.
The question is whether there is anything I should pay attention to beforehand.
I am also wondering if it will somehow affect the server's disk space (currently about 110 GB free).
After the whole operation, a SHRINK DATABASE will probably be necessary.
TRUNCATE TABLE is faster and uses fewer system and transaction log resources than DELETE with no WHERE clause, but if you need an even faster solution, you can create a new version of the table (table1), drop the old table, and rename table1 to table.
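A hedged sketch of that create/drop/rename route (table names are placeholders); note that SELECT ... INTO copies only the column definitions, so any indexes, constraints, triggers, and permissions would need to be re-created on the new table:

-- create an empty copy of the table (TOP 0 copies the structure, no rows)
SELECT TOP 0 * INTO dbo.table1 FROM dbo.BigTable;

-- drop the 90+ GB original and put the empty copy in its place
DROP TABLE dbo.BigTable;
EXEC sp_rename 'dbo.table1', 'BigTable';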

Run restore on working database. What happens?

What happened when I ran:
zcat /mnt/Postgres/restoreFile.gz | psql my_db
on the working database? After doing ALTER TABLE and other standard things, there were problems with duplicated keys. When I stopped it and tried to insert into the database, I got duplicate key errors because of sequences and constraints. It seems like all the data is in, but what about the sequences? What really happened to that database?
A normal Postgres backup consists of table design (like create table) and data (like insert) statements. If you run it twice, most design statements will fail. The insert statements would succeed insofar as the data definition allows for duplicate rows.
So restoring a database to a production server would typically result in a lot of duplicate rows in tables without a primary key. Some design changes made after the backup (like changing the owner of a table) may be undone.
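A small illustration of that behaviour with hypothetical tables (not taken from the dump in question):

-- no primary key: re-running the dump's INSERTs quietly duplicates every row
CREATE TABLE log_entries (msg text);
INSERT INTO log_entries VALUES ('start');
INSERT INTO log_entries VALUES ('start');  -- succeeds; the table now holds two identical rows

-- with a primary key: the second run fails instead of duplicating
CREATE TABLE users (id integer PRIMARY KEY, name text);
INSERT INTO users VALUES (1, 'alice');
INSERT INTO users VALUES (1, 'alice');     -- ERROR: duplicate key value violates unique constraint "users_pkey"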

DROP TABLE or DELETE TABLE? Which is best practice?

Working on redesigning some databases in my SQL SERVER 2012 instance.
I have databases where I put my raw data (from vendors) and then I have client databases where I will (based on client name) create a view that only shows data for a specific client.
Because this data is volatile (Google AdWords & Google DFA), I typically just delete the last 6 days and insert 7 days every day from the vendor databases. Doing this gives me comfort in knowing that Google has had time to solidify its data.
The question I am trying to answer is:
1. Instead of using views, would it be better to use a 'SELECT INTO' statement and DROP the table every day in the client database?
I'm afraid that automating my process using the 'DROP TABLE' method will not scale well long-term. While testing it myself, it seems that performance is improved because it does not have to scan the entire table for the date range. I've also tested this with an index on the 'date' column, and performance still seemed better with the 'DROP TABLE' method.
I am looking for best practices here.
NOTE: This is my first post. So I am not too familiar with how to format correctly. :)
Deleting rows from a table is a time-consuming process. All the deleted records get logged, and performance of the server suffers.
Instead, databases offer truncate table. This removes all the rows of the table without logging the rows, but keeps the structure intact. Also, triggers, indexes, constraints, stored procedures, and so on are not affected by the removal of rows.
In some databases, if you delete all rows from a table, the operation is really a truncate table. However, SQL Server is not one of those databases. In fact, the documentation lists truncate as a best practice for deleting all rows:
To delete all the rows in a table, use TRUNCATE TABLE. TRUNCATE TABLE is faster than DELETE and uses fewer system and transaction log resources. TRUNCATE TABLE has restrictions; for example, the table cannot participate in replication. For more information, see TRUNCATE TABLE (Transact-SQL).
You can drop the table. But then you lose auxiliary metadata as well -- all the things listed above.
I would recommend that you truncate the table and reload the data using insert into or bulk insert.
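A minimal sketch of that pattern; the table, column, and file names below are placeholders, not from the question:

-- remove all rows but keep indexes, constraints, triggers, and permissions in place
TRUNCATE TABLE dbo.ClientData;

-- reload from the vendor/staging table ...
INSERT INTO dbo.ClientData (AccountId, ActivityDate, Clicks)
SELECT AccountId, ActivityDate, Clicks
FROM vendor.AdwordsRaw
WHERE ActivityDate >= DATEADD(DAY, -7, CAST(GETDATE() AS date));

-- ... or reload from a flat file
BULK INSERT dbo.ClientData
FROM 'C:\feeds\adwords_last7.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);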

Teradata Drop Column returns with "no more room"

I am trying to drop a varchar(100) column of a 150 GB table (4.6 billion records). All the data in this column is null. I have 30GB more space in the database.
When I attempt to drop the column, it says "no more room in database XY". Why does such an action need so much space?
The ALTER TABLE statement needs temporary storage for the altered version before overwriting the original table. I guess the table that you are trying to alter occupies at least 1/3 of your total storage.
This could happen for a variety of reasons. It's possible that one of the AMPs in your database is full; this would cause that error even with a minor table alteration.
Try running the following SQL to check space:
select VProc, CurrentPerm, MaxPerm
from dbc.DiskSpace
where DatabaseName='XY';
Also, you should check which column the primary index is on in this very large table. If the table is not distributed evenly (i.e., it is skewed), you could also run into space issues when altering the table or running a query against it.
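As one way to check the distribution, something like the following (a generic sketch against dbc.TableSize; 'BigTable' is a placeholder) shows how the table's permanent space is spread across AMPs and gives a rough skew percentage:

select DatabaseName,
       TableName,
       sum(CurrentPerm) as TotalPerm,
       max(CurrentPerm) as LargestAmpPerm,
       (1 - avg(CurrentPerm) / nullifzero(max(CurrentPerm))) * 100 as SkewPct
from dbc.TableSize
where DatabaseName = 'XY'
  and TableName = 'BigTable'
group by DatabaseName, TableName;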
For additional suggestions I found a decent article on the kind of things you may want to investigate when the "no more room in database" error occurs - Teradata SQL Tutorial. Some of the suggestions include:
dropping any intermediary work or "sandbox" tables
implementing single-value or multi-value compression
dropping unwanted/unnecessary secondary indexes
removing data in dbc tables such as the accesslog or dbql tables
removing and archiving old tables that are no longer used

Order SQL Azure Table Columns via SSMS

I know you can go into the design view of a table in SQL Server Management Studio and reorder columns as they appear in the design view, however this isn't possible with SQL Azure as the option is disabled. Is there a way to modify SQL Azure tables so that you can reorder their columns as they appear in the design view?
I have been running a number of database upgrades over the last few months to support new requirements and would like to reorder the way the columns appear in design view so they're easier to read, i.e. so they start with a primary key, followed by foreign keys, then normal columns, and end with the added by and modified by fields. It's purely to make the tables more readable as I manage them over time.
Just run a script against the table. It's a bit of pseudocode, but you should get the idea.
CREATE TABLE TableWithDesiredOrder (PK, FK1, FK2, COL1, COL2, ...)
INSERT INTO TableWithDesiredOrder (PK, FK1, FK2, COL1, COL2, ...)
SELECT PK, FK1, FK2, COL1, COL2, ... FROM OriginalTable
DROP TABLE OriginalTable
Finally, rename the new table:
EXEC sp_rename 'TableWithDesiredOrder', 'OriginalTable'
Just another option: I use SQL Delta to propagate my db changes from dev db up to Azure db. So in this case, I just change the col order locally using SSMS GUI, and SQL Delta will do the createnew>copytonew>dropold for me, along with my other local changes. (In Project Options, I set Preserve Column Order=Yes.)
I experienced the same with Azure SQL Database: basically, my view changes made with ALTER were not picked up when I did a SELECT * from the view, or the column headers were mixed up with the column values.
To fix it, I dropped the view and re-created it. That worked.