In MS Access, how do I write a query / DDL statement to change multiple column names in a single statement?
You have to create a new table, copy the data, and drop the old table:
ALTER TABLE does not support multiple ALTER COLUMN clauses.
You could, however, ADD multiple columns in one go, populate them, then drop the old columns.
Multiple ADDs in a single statement are supported.
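A sketch of that approach in Access SQL; the table and column names are placeholders, and Access runs one DDL statement at a time, so execute these separately:

```sql
-- Add the replacement columns in one statement (multiple ADDs are allowed)
ALTER TABLE MyTable ADD COLUMN NewName1 TEXT(50), NewName2 LONG;

-- Copy the data across
UPDATE MyTable SET NewName1 = OldName1, NewName2 = OldName2;

-- Drop the old columns one at a time
ALTER TABLE MyTable DROP COLUMN OldName1;
ALTER TABLE MyTable DROP COLUMN OldName2;
```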
I have created multiple tables inside a database using phpMyAdmin, but I can't find out how to do this:
Duplicate values should not be allowed in one of my columns. The column is called "name".
And I have one column called "prod_time" and one called "stock_ant" that must be filled in (i.e. it should not be an option to leave them blank or with a zero value).
Are there queries I can use for these actions?
If you want a column to have unique values, use a unique constraint or index. For instance:
alter table t add constraint unq_t_name unique (name);
If you don't want columns to have NULL values, then declare them NOT NULL when you create the table.
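In MySQL (which phpMyAdmin sits on top of), both changes might look like this; the table name and the datatypes here are assumptions:

```sql
-- Reject duplicate values in "name"
ALTER TABLE products ADD CONSTRAINT unq_products_name UNIQUE (name);

-- Require prod_time and stock_ant to always be filled in
ALTER TABLE products MODIFY prod_time DATETIME NOT NULL;
ALTER TABLE products MODIFY stock_ant INT NOT NULL;
```

Note that MODIFY will fail if existing rows already contain NULLs in those columns, so fill them in first.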
What is the statement to alter a table holding about 10 million rows, adding a GUID column which will hold a unique identifier for each row (without being part of the PK)?
What datatype should the globally unique identifier column be?
Is there a procedure that creates it?
How will it be auto-incremented or generated every time a new record is inserted?
Break it down into separate stages.
First, we need a new column:
alter table MyTable
add guid_column raw(32) default sys_guid();
Then update the existing rows:
update MyTable
set guid_column = sys_guid();
Use the identity columns feature of Oracle 12c to add a column to the table which auto-increments when new rows are added.
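For example (a sketch; the table and column names are placeholders), Oracle 12c lets you add an identity column directly, and it populates the existing rows as part of the ALTER:

```sql
ALTER TABLE MyTable
  ADD id_column NUMBER GENERATED ALWAYS AS IDENTITY;
```

On a 10-million-row table this fills in every existing row, so expect the statement to take some time and lock the table while it runs.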
An ideal way to handle this task is to:
a) CREATE a "new" table with a structure similar to the source table using CREATE TABLE AS (a CTAS statement) with a new identity column, instead of adding the identity column to the existing table with an ALTER statement.
b) CTAS works faster than running ALTER on the existing table.
c) After confirming that the "new" table has all the data from the source table, along with a column containing unique values and all the indexes and constraints, you can drop the original table.
Another way to avoid recreating the constraints and indexes present on the original table is to create an empty table with all the constraints, indexes, and the identity column, then have the DBA extract the data from the original table and import it into the "new" table.
Benefits:
This approach ensures that none of the objects dependent on the source table become INVALID, which generally hampers some features of the application(s).
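A sketch of the empty-table variant (Oracle 12c+; all names and datatypes here are placeholders):

```sql
-- New table with the identity column plus the same business columns
CREATE TABLE my_table_new (
  id    NUMBER GENERATED ALWAYS AS IDENTITY,
  col1  VARCHAR2(100),
  col2  DATE
);

-- Copy the data; the identity column fills itself in for each row
INSERT INTO my_table_new (col1, col2)
SELECT col1, col2 FROM my_table;
```

Constraints and indexes would be created on my_table_new before the import, and the original table dropped and the new one renamed only after the row counts have been verified.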
I have a database which contains more than 30m records, and I need to add two new columns to a table. The problem is that I need these columns to be NOT NULL, and without a default value. I thought I would just add these columns without the NOT NULL constraint, fill them with data, then add the constraint, but Redshift doesn't support that. I have another solution in mind, but I wonder if there is any simpler solution than this?
Create the two new columns with NOT NULL and DEFAULT
Fill the columns with data
Create an empty table with the same columns as the target table (of course the two new columns would be just NOT NULL)
Insert everything from the target table into the new table
Drop the target table
Rename the new table to the target name
I would suggest:
Existing Table-A
Create a new Table-B that contains the new columns, plus an identity column (e.g. customer_id) that matches Table-A.
Insert data into Table-B (2 columns + identity column)
Use CREATE TABLE AS to simultaneously create a new Table-C (specifying DISTKEY and SORTKEY) while querying Table-A and Table-B via a JOIN on the identity column
Verify contents of Table-C
VACUUM Table-C (shouldn't be necessary, but just in case, and it should be quick)
Delete Table-A and Table-B
Rename Table-C to desired table name (which was probably the same as Table-A)
In Summary: Existing columns in Table-A + Extra columns in Table-B ➞ Table-C
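Step 4 might look like this sketch (Redshift SQL; all table, column, and key names here are placeholders):

```sql
-- Build Table-C in one pass, distributed and sorted as desired
CREATE TABLE table_c
DISTKEY (customer_id)
SORTKEY (customer_id)
AS
SELECT a.*, b.new_col1, b.new_col2
FROM table_a a
JOIN table_b b ON a.customer_id = b.customer_id;
```

Using an inner JOIN assumes every row in Table-A has a match in Table-B; verify the row count of Table-C against Table-A before the switchover.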
Reasoning:
UPDATE statements do not run very well in Redshift. An UPDATE marks the existing data rows for each column as 'deleted', then appends new rows to the end of each column. Doing lots of UPDATEs will blow out the size of a table and leave it unsorted. It's also relatively slow. You would need to deep copy or VACUUM the table afterwards to fix things.
Using CREATE TABLE AS with a JOIN will generate all "final state" data in one query, and the resulting table will be sorted and in a 'clean' state.
The process gives you a chance to verify the content of Table-C before committing to the switchover. Very handy for debugging the process!
See also: Performing a Deep Copy - Amazon Redshift
I have a table that currently exists, and I want to add a column from another table via a join. Is there a way to add it to this table without having to create a new one? I know I could easily make another table by joining the data in, but I wasn't sure if there was a way to insert that kind of data into an existing one.
I have a students table in postgres that is populated via an external source. Each night we populate the students_swap table, and then after the long running operation is complete we rename it to students and the original table then becomes students_swap to be used the next day.
The problem with this is that when we add a new column or index to the original table we must remember to also do so on the swap table. I am attempting to automate some of this w/ the following:
-- Drop the swap table if it's already there...
DROP TABLE IF EXISTS students_swap;
-- Recreate the swap table using the original as a template...
CREATE TABLE students_swap AS SELECT * FROM students WHERE 1=2;
... populate the swap table ....
ALTER TABLE students RENAME TO students_temp;
ALTER TABLE students_swap RENAME TO students;
ALTER TABLE students_temp RENAME TO students_swap;
This works well for creating the table structure but no indices are created for the swap table.
My question is how do I copy all of the indexes in addition to the table structure to make sure my original table and swap table stay in sync?
Use create table ... like instead:
CREATE TABLE students_swap (LIKE students INCLUDING ALL);
This will include indexes, primary keys and check constraints but will not re-create the foreign keys.
Edit:
INCLUDING ALL will also copy the default settings for columns populated by sequences (e.g. a column defined as serial). It sounds as if you want that. If you do not want that, then use INCLUDING INDEXES INCLUDING CONSTRAINTS instead.
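Applied to the nightly job in the question, the automated part might then look like this (a sketch; wrapping the renames in a transaction is an assumption on my part, but Postgres DDL is transactional):

```sql
-- Recreate the swap table, copying indexes, primary keys and check constraints
DROP TABLE IF EXISTS students_swap;
CREATE TABLE students_swap (LIKE students INCLUDING ALL);

-- ... populate students_swap ...

-- Swap the two tables atomically
BEGIN;
ALTER TABLE students RENAME TO students_temp;
ALTER TABLE students_swap RENAME TO students;
ALTER TABLE students_temp RENAME TO students_swap;
COMMIT;
```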