Is there any way to copy all column values from one table to another except the Identity column, without mentioning all the rest of the column names?
I have a table with 63 columns. I created a temporary table with:
SELECT * INTO #TmpWide FROM WideTable WHERE 1 = 0
Now I want to copy some data from WideTable to #TmpWide. I need all the columns of WideTable except the Identity Id column, because I want the copied data to have their own sequential Ids in #TmpWide, starting from 1. Is it possible without mentioning the (63-1) column names?
You could try dropping the column after the table is created:
SELECT * INTO #TmpWide FROM WideTable WHERE 1=0
ALTER TABLE #TmpWide DROP COLUMN [Id]
This does feel a little ugly or hack-y, but it should do the trick.
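If you also want to avoid typing the remaining 62 names in the INSERT itself, you could build the column list from the catalog with dynamic SQL after the two statements above. A rough sketch, assuming SQL Server 2017+ for STRING_AGG (older versions would need the FOR XML PATH trick instead):
ALTER TABLE #TmpWide ADD [Id] INT IDENTITY(1,1) -- re-add a fresh identity starting at 1

-- build "[Col1], [Col2], ..." from every #TmpWide column except [Id]
DECLARE @cols NVARCHAR(MAX) =
    (SELECT STRING_AGG(QUOTENAME(name), ', ')
     FROM tempdb.sys.columns
     WHERE object_id = OBJECT_ID('tempdb..#TmpWide')
       AND name <> 'Id')

EXEC ('INSERT INTO #TmpWide (' + @cols + ') SELECT ' + @cols + ' FROM WideTable')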
There isn't a way to do that, and it is also a bad idea to use * in a situation like this: if WideTable changes, you will be forced to change the stored procedures that SELECT * from it. I wrote hundreds of stored procs like this, and all it did was create nightmares I'm still dealing with today. Good luck.
I have a table with more than 1.5 million records, with two columns, A and B. By mistake, the values meant for column A were inserted into column B, and B's values into A.
We only discovered the issue recently. What is the best way to correct it: renaming the columns to swap them (I don't see how that's possible, since we can't rename A to B while B already exists), or swapping the values contained in the two columns?
You can use the query below to swap the column values:
UPDATE table_name SET A = B, B = A;
Since you have a huge amount of data, renaming might seem like the better option, but renaming columns to fix a data issue is not the right solution. So use the update query above to correct your data.
Before updating, take a backup of the table you are changing:
CREATE TABLE table_name_bkp AS SELECT * FROM table_name;
Always take a backup before playing with the original data, so nothing gets messed up.
1.5 million rows aren't a big deal for SQL Server. Swapping column names has many downsides in a relational DB (indexes, foreign keys, and lots of dependent code to check), so I suggest going the traditional route: simply do the update.
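One caveat on the swap query: it relies on the right-hand sides seeing the pre-update values, which is how SQL Server, Oracle, and PostgreSQL behave; MySQL evaluates SET assignments left to right, so A = B, B = A would leave B's old value in both columns there. A portable (if slower) fallback, sketched with a throwaway helper column (VARCHAR(255) is a placeholder; match A's actual type):
ALTER TABLE table_name ADD tmp_swap VARCHAR(255);
UPDATE table_name SET tmp_swap = A;
UPDATE table_name SET A = B;
UPDATE table_name SET B = tmp_swap;
ALTER TABLE table_name DROP COLUMN tmp_swap;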
I had about 20,000,000 records in a table (random data), and then I added an empty column to that table...
but when I updated that table to fill the column, the process broke down..
I tried using a cursor and an index, but no results..
Do you have any fast solution, or any alternative approach?
Thank you in advance :)
Maybe the fastest way would be to create new_table as select * from the existing table, computing the value of the new column inside the SELECT of the CTAS. After that, you can rename the old table to something like table_bckp, rename the new table to the original table name, and then apply the constraints, indexes, and other scripts previously saved from the old table's definition.
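A minimal sketch of that approach, assuming Oracle-style CTAS and rename syntax (the table names and the expression computing the new column are placeholders):
CREATE TABLE new_table AS
SELECT t.*,
       UPPER(t.some_col) AS new_col -- compute the new column's value here
FROM old_table t;

ALTER TABLE old_table RENAME TO table_bckp;
ALTER TABLE new_table RENAME TO old_table;
-- then re-apply constraints, indexes, grants and triggers saved from the old definition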
I want to swap columns within Visual FoxPro 9 in table_1 before inserting its rows into table_2, so as to avoid data losses caused by datatype variations. I tried these two options based on other solutions on Stack Overflow, but I get syntax error messages for both command inputs. The name field has datatype character(5), and it needs to be placed after the subdir field.
ALTER table "f:\csp" modify COLUMN name character(5) after subdir
ALTER table "f:\csp" change COLUMN name name character(5) after subdir
I attempted these commands based on solutions here:
How to move columns in a MySQL table?
You never need to change the column order, and you should never rely on column order to do anything.
For inserting into another table from this one, you could simply select the columns in the order you desire (the column names do not even need to match, in the case of "insert ... select ..."), i.e.:
insert into table_2 (subdir, name) ;
select subdir, name from table_1
Another way is to use the xBase commands like:
select table_2
append from table_1
In the latter case, VFP matches on column names.
All in all, relying on column ordering is dangerous. If you really want to do it, you still can, in a number of ways. One is to select all data into a temp table, recreate the table in the order you want, and fill it back from the temp table (this might not be as easy as it sounds if there are existing dependencies such as referential integrity, and you also need to recreate the indexes).
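For completeness, a rough sketch of that rebuild in VFP (all names and field types are placeholders; indexes and relations would need to be re-created afterwards):
* stash the data, rebuild the table in the desired order, then refill it
SELECT * FROM table_1 INTO TABLE tmp_hold
USE IN SELECT("table_1")  && close the original before dropping it
DROP TABLE table_1
CREATE TABLE table_1 (subdir C(10), name C(5))  && columns in the desired order
INSERT INTO table_1 SELECT subdir, name FROM tmp_hold
USE IN SELECT("tmp_hold")
DROP TABLE tmp_hold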
I have a table people with less than 100,000 records and I have taken a backup of this table using the following:
create table people_backup as select * from people
I add some new records to my people table over time, but eventually I want to merge the records from my backup table into people. Unfortunately I cannot simply DROP my table as my new records will be lost!
So I want to update the records in my people table using the records from people_backup, based on their primary key id and I have found 2 ways to do this:
MERGE the tables together
use some sort of fancy correlated update
Great! However, both of these methods use SET and make me specify which columns I want to update. Unfortunately I am lazy, and the structure of people may change over time; while my CTAS statement doesn't need to be updated, my update/merge script will need changes, which feels like unnecessary work.
Is there a way to merge entire rows without having to specify columns? I see here that not specifying columns during an INSERT will direct SQL to insert values in order; can the same methodology be applied here, and is it safe?
NB: The structure of the table will not change between backups
Given that your table is small, you could simply
DELETE FROM table t
WHERE EXISTS( SELECT 1
FROM backup b
WHERE t.key = b.key );
INSERT INTO table
SELECT *
FROM backup;
That is slow and not particularly elegant (particularly if most of the data from the backup hasn't changed) but assuming the columns in the two tables match, it does allow you to not list out the columns. Personally, I'd much prefer writing out the column names (presumably those don't change all that often) so that I could do an update.
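If you do end up listing columns for a proper UPDATE or MERGE, you don't have to type them by hand; the data dictionary can generate the list. A sketch assuming Oracle (which the create table ... as select syntax suggests), using LISTAGG, available from 11gR2 onward:
SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id) AS col_list
FROM user_tab_columns
WHERE table_name = 'PEOPLE';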
I have a table with 32 million rows and 31 columns in PostgreSQL 9.2.10. I am altering the table by adding columns with updated values.
For example, if the initial table is:
id initial_color
-- -------------
1 blue
2 red
3 yellow
I am modifying the table so that the result is:
id initial_color modified_color
-- ------------- --------------
1 blue blue_green
2 red red_orange
3 yellow yellow_brown
I have code that will read the initial_color column and update the value.
Given that my table has 32 million rows and that I have to apply this procedure on five of the 31 columns, what is the most efficient way to do this? My present choices are:
Copy the column and update the rows in the new column
Create an empty column and insert new values
I could do either option with one column at a time or with all five at once. The column types are either character varying or character.
The column types are either character varying or character.
Don't use character, that's a misunderstanding. varchar is ok, but I would suggest just text for arbitrary character data.
Any downsides of using data type "text" for storing strings?
Given that my table has 32 million rows and that I have to apply this procedure on five of the 31 columns, what is the most efficient way to do this?
If you don't have objects (views, foreign keys, functions) depending on the existing table, the most efficient way is to create a new table. Something like this (details depend on your installation):
BEGIN;
LOCK TABLE tbl_org IN SHARE MODE; -- to prevent concurrent writes
CREATE TABLE tbl_new (LIKE tbl_org INCLUDING STORAGE INCLUDING COMMENTS);
ALTER TABLE tbl_new ADD COLUMN modified_color text
                  , ADD COLUMN modified_something text;
-- , etc
INSERT INTO tbl_new (<all columns in order here>)
SELECT <all columns in order here>
, myfunction(initial_color) AS modified_color -- etc
FROM tbl_org;
-- ORDER BY tbl_id; -- optionally order rows while being at it.
-- Add constraints and indexes like in the original table here
DROP TABLE tbl_org;
ALTER TABLE tbl_new RENAME TO tbl_org;
COMMIT;
If you have depending objects, you need to do more.
Either way, be sure to add all five at once. If you update each in a separate query, you write another row version each time due to the MVCC model of Postgres.
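For contrast, if you were to update in place instead, a single pass touching all five columns at once might look like this (myfunction and the column names are placeholders, as above):
UPDATE tbl_org
SET    modified_color     = myfunction(initial_color)
     , modified_something = myfunction(initial_something);
-- one new row version per row instead of five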
Related cases with more details, links and explanation:
Updating database rows without locking the table in PostgreSQL 9.2
Best way to populate a new column in a large table?
Optimizing bulk update performance in PostgreSQL
While creating a new table you might also order columns in an optimized fashion:
Calculating and saving space in PostgreSQL
Maybe I'm misreading the question, but as far as I know, you have 2 possibilities for creating a table with the extra columns:
CREATE TABLE
This would create a new table, which could be filled either with CREATE TABLE ... AS SELECT ... at creation time, or with a separate INSERT ... SELECT ... later on.
Neither variant is what you seem to want, since you asked for a solution that doesn't list all the fields.
Also this would require all data (plus the new fields) to be copied.
ALTER TABLE...ADD ...
This creates the new columns. As there is no way to reference existing column values in the ADD clause, you will need an additional UPDATE ... SET ... to fill in their values.
So I don't see any way to realize a procedure that follows your choice 1.
Nevertheless, copying the column data just to overwrite it in a second step would be suboptimal in any case. Adding new columns via ALTER TABLE does minimal I/O. So even if choice 1 were possible, choice 2 promises performance that is better by factors.
Thus, two statements, one ALTER TABLE adding all your new columns in one go and then one UPDATE providing the new values for those columns, will achieve what you want, as sketched below.
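A rough sketch of those two statements, reusing the color example from the question (the second column and both expressions are placeholders):
ALTER TABLE tbl
    ADD COLUMN modified_color text
  , ADD COLUMN modified_other text; -- all five new columns in one go

UPDATE tbl
SET    modified_color = initial_color || '_modified'
     , modified_other = initial_other || '_modified';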
Create the new column (modified_color); it will have a value of NULL or blank on all records. Then run an update statement, assuming your table name is 'table':
update table
set modified_color = 'blue_green'
where initial_color = 'blue'
If I am correct, this can also work like this:
update table set modified_color = 'blue_green' where initial_color = 'blue';
update table set modified_color = 'red_orange' where initial_color = 'red';
update table set modified_color = 'yellow_brown' where initial_color = 'yellow';
Once you have done this you can do another update (assuming you have another column, which I will call modified_color1):
update table set modified_color1 = modified_color;