I have an application that is ready to go live once we take data from an MS Access database and import it into SQL Server 2005. I have used the Migration Assistant for Access tool to get the Access database into SQL Server, but now I need to take the data from that table and put it into the tables that our app is going to use. Is there a T-SQL way to insert multiple rows while at the same time 're-mapping' the data?
For example
SELECT ID, FIRST_NAME, LAST_NAME
INTO prod_users (user_id, first_name, last_name)
FROM test_users
I know that select * into works when the column names are the same, but the
prod_users (column names, ..., ...)
part is what I really need to get to work.
Any ideas?
I believe the SELECT INTO syntax is used to create new tables. If you want to map data from the tables you just imported to some other existing tables, try a plain INSERT. For example:
INSERT INTO prod_users (user_id, first_name, last_name)
SELECT ID, FIRST_NAME, LAST_NAME
FROM test_users
The mapping of columns from test_users to prod_users is based on the order in which they are listed: the first column in the "prod_users (column_names, ...)" list matches the first column in the "SELECT other_col_names, ..." list, the second matches the second, and so on. Therefore, in the code sample above, ID is mapped to user_id, FIRST_NAME to first_name, and LAST_NAME to last_name.
Just make sure you have the same number of columns in each list and that the column types match (or can be converted to match). Note that you don't have to specify all the columns in either table (as long as the destination table has valid defaults for the unspecified columns).
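For instance, here is a sketch assuming prod_users also has a created_date column with a default, and that ID needs an explicit conversion (both details are invented for illustration):

```sql
-- created_date is omitted from the column list, so it takes its
-- schema default; ID is cast explicitly in case the types differ.
INSERT INTO prod_users (user_id, first_name, last_name)
SELECT CAST(ID AS int), FIRST_NAME, LAST_NAME
FROM test_users;
```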
See the INSERT syntax for details; the part relevant to your question is the "execute_statement" expression.
INSERT and SELECT are the magic keywords:
insert into new_table (list_of_columns)
select columnA, functionB(), (correlated_subquery_C)
from table_or_join
where criteria_expression_is_true
Maybe you can be more specific about what you mean by re-mapping?
Based on your comment, a more specific query is:
insert into new_table (user_id, firstname, lastname)
select id, first_name, last_name from old_database..old_table
Related
I just want to know if it is possible to insert data into a table that has columns, for example ID, FIRST_NAME, AGE, SEX, SALARY, but I want to insert into all columns except the ID column.
Normally as I know I need to set this code
INSERT INTO TABLE_NAME (FIRST_NAME, AGE, SEX, SALARY)
VALUES (....);
but it will take a long time if there are a lot of columns...
Is there any way that will save me time?
You can pass NULL for the columns you don't want to set. The row will be inserted with the default value if one is provided in the schema, or NULL if the field is nullable.
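For example, both of these would work (the table and values are made up):

```sql
-- Omit ID from the column list entirely; it gets its schema default
INSERT INTO TABLE_NAME (FIRST_NAME, AGE, SEX, SALARY)
VALUES ('John', 30, 'M', 50000);

-- Or name it and pass NULL explicitly (assuming ID is nullable)
INSERT INTO TABLE_NAME (ID, FIRST_NAME, AGE, SEX, SALARY)
VALUES (NULL, 'John', 30, 'M', 50000);
```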
This question already has answers here:
Can you SELECT everything, but 1 or 2 fields, without writer's cramp?
I am using Oracle Database and I need to write a query which retrieves all the values of a table record (for a specific WHERE condition), except for one column which is known.
Imagine you have the following table:
Sample table
where you do not want to retrieve the "Age" column, but where, in future releases of the software, the table could have more columns than the ones currently present.
Is there any command in Oracle which excludes a specific column (always known, as in the example "Age") and allows me to retrieve all the other values?
Thanks in advance!
You can make that particular column invisible using the following statement:
alter table TABLE_NAME modify COLUMN_NAME INVISIBLE;
This will exclude that column from a select * statement unless and until you specify that particular column in the select clause, like below:
select COLUMN_NAME from TABLE_NAME;
From Your sample data:
alter table SAMPLE_TABLE modify Age INVISIBLE;
select * FROM SAMPLE_TABLE will then omit the Age column, while
select FirstName, LastName, Address, City, Age from SAMPLE_TABLE will still return it.
There are several approaches
1) You can set the column UNUSED. It won't be retrieved (and it won't be usable) by queries. This is permanent: you can't get the column back, and the only allowed operation afterwards is DROP UNUSED COLUMNS.
ALTER TABLE sample_table SET UNUSED(age);
2) You can set the column INVISIBLE; this is reversible. It won't be retrieved unless you explicitly reference it in the SELECT list.
ALTER TABLE sample_table MODIFY age INVISIBLE;
-- to change it back to VISIBLE
ALTER TABLE sample_table MODIFY age VISIBLE;
3) Create a VIEW without the age column and then query the view instead of the table.
CREATE VIEW sample_table_view AS
SELECT first_name, last_name, address, city FROM sample_table;
Have an issue. I have a database called DATA. Within it are multiple tables: one called MASTER and others called temp1.
MASTER has columns called first, middle, last, dob, address, city, state, zip, phone, cell
Temp1 has essentially the same columns, more or less, but in a different order, with different column names, more columns than exist in MASTER, etc.
I'd like to be able to write a T-SQL script that I can execute to move data from temp1 to MASTER, but map which column gets what data.
Using something like:
INSERT INTO MASTER
SELECT * from temp1
just blows up: names land in the wrong fields and it's a mess, because the columns within temp1 are, in a jumbled order:
dateofbirth, lastname, firstname, middlename, telephone, cell, address, city, state, zip
What'd I'd like to do is be able to map the columns while they are transferred... like if I was using the import GUI.
so firstname to first, lastname to last, cell to cell, address to address, dateofbirth to dob... etc., and some columns skipped entirely.. but you see where it's going :-)
Is this possible?? Or am I stuck using the GUI??
All the GUI does is generate SQL for you, so rest assured, this is possible.
Here's what you want:
INSERT INTO [MASTER] (
first, middle, last, dob, address, city, state, zip, phone, cell, ...
)
SELECT
firstname, middlename, lastname, dateofbirth, address, city, state, zip, telephone, cell, ...
FROM
[temp1]
SQL Server maps the columns by their position in the two lists, disregarding names. If there is a type mismatch it will try to perform an implicit conversion or fail with a runtime error.
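If a source column's type doesn't line up, an explicit conversion in the SELECT list is safer than relying on the implicit one. A sketch, assuming dateofbirth is stored as mm/dd/yyyy text in temp1 (an assumption for illustration):

```sql
INSERT INTO [MASTER] (first, last, dob)
SELECT firstname, lastname,
       CONVERT(date, dateofbirth, 101)  -- style 101 = mm/dd/yyyy
FROM [temp1];
```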
You can use a select statement with an insert statement which allows you to order which columns go where.
i.e.
INSERT INTO [MASTER] (FIRSTNAME,LASTNAME)
SELECT FIRST,LAST FROM TEMP1
You can have each column list (in both the insert and the select statements) in any order - regardless of the column structure of the table.
In my view, we need to map columns programmatically or manually to insert the data accurately.
I have one simple option; not sure if it's best practice:
Can we generate insert statement for each row of data with proper column mappings?
For Example:
Export all the rows from TEMP1 table to Excel
Write an INSERT statement for all the values of row 1 in Excel using CONCATENATE.
=CONCATENATE("INSERT INTO MASTER(Lname,Fname,Address) VALUES('",A1,"','",B1,"','",C1,"')")
Drag the formula down to generate an INSERT statement for every row.
Copy all the INSERT statements with their values and run them against the MASTER table.
I want to select rows from a table called Users where the column Logon is equal to "foo" - However, I also want to return "Foo" or "FOO".
I could do something like:
SELECT Id, Name FROM Users WHERE UPPER(Logon) = 'FOO';
And then convert my parameter to uppercase. However, in our code we have literally hundreds of spots where we'd have to update this.
Is there a way to make the table schema itself case-insensitive so these queries will just work without modification? Thanks!
UPDATE
I'd rather not change case-sensitivity in the entire database or at the session level. Changing the SQL queries is hard since we use the .NET Entity Framework and have LINQ queries against this table all over the place. It doesn't appear that EF supports automatically converting case unless you want to change every LINQ query as well.
I'd rather not change case-sensitivity in the entire database or at the session level.
Is there a way to make the table schema itself case-insensitive so these queries will just work without modification
Yes, it is possible, but only from Oracle 12cR2 onward. You can define it at several levels (column, table, schema):
-- default
CREATE TABLE tab2(i INT PRIMARY KEY, name VARCHAR2(100));
INSERT INTO tab2(i, name) VALUES (1, 'John');
INSERT INTO tab2(i, name) VALUES (2, 'Joe');
INSERT INTO tab2(i, name) VALUES (3, 'Billy');
SELECT /*csv*/ *
FROM tab2
WHERE name = 'jOHN' ;
/*
"I","NAME"
no rows selected
*/
SELECT /*csv*/
column_id,
column_name,
collation
FROM user_tab_columns
WHERE table_name = 'TAB2'
ORDER BY column_id;
/*
"COLUMN_ID","COLUMN_NAME","COLLATION"
1,"I",""
2,"NAME","USING_NLS_COMP"
*/
Column-level:
CREATE TABLE tab2(i INT PRIMARY KEY, name VARCHAR2(100) COLLATE BINARY_CI);
INSERT INTO tab2(i, name) VALUES (1, 'John');
INSERT INTO tab2(i, name) VALUES (2, 'Joe');
INSERT INTO tab2(i, name) VALUES (3, 'Billy');
SELECT /*csv*/ *
FROM tab2
WHERE name = 'jOHN' ;
/*
"I","NAME"
1,"John"
*/
-- COLUMN LEVEL
SELECT /*csv*/
column_id,
column_name,
collation
FROM user_tab_columns
WHERE table_name = 'TAB2'
ORDER BY column_id;
/*
"COLUMN_ID","COLUMN_NAME","COLLATION"
1,"I",""
2,"NAME","BINARY_CI"
*/
Table-level:
CREATE TABLE tab2(i INT PRIMARY KEY, name VARCHAR2(100))
DEFAULT COLLATION BINARY_CI;
Schema-level:
CREATE USER myuser IDENTIFIED BY myuser
DEFAULT TABLESPACE users
DEFAULT COLLATION BINARY_CI;
Answering my own question because I didn't feel that either of the proposed answers really addressed the issue.
Oracle does not support the concept of a case-insensitive column type, and case sensitivity can only be controlled at the database or session level. There are a few ways around this, such as making the column virtual or reading through a view, but each of them would also require you to transform the right operand as well (such as WHERE X = UPPER(:p1)).
I ended up just updating my database (which was a list of usernames from Active Directory) to have the correct cases, so I no longer have to compare case insensitive.
I don't think you can do it for just one column. You can try the following approach: make your Logon column virtual as UPPER(s_Logon) (create s_Logon, copy all the values from the existing Logon column, drop Logon, recreate it as a virtual column). I believe that will work for SELECTs, but for inserts/updates you will need to write to s_Logon. Hope that makes sense.
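A sketch of that approach (the VARCHAR2 length is a guess, and RENAME COLUMN is used as a shortcut for the create/copy/drop steps; untested):

```sql
-- Rename the physical column, then expose an upshifted virtual column
-- under the old name so existing SELECTs keep working.
ALTER TABLE Users RENAME COLUMN Logon TO s_Logon;
ALTER TABLE Users ADD (
  Logon VARCHAR2(100) GENERATED ALWAYS AS (UPPER(s_Logon)) VIRTUAL
);
```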
You could set up a view on your table with all columns identical, except for the affected column which would be upshifted - something like:
create view v_Users as
select Id, Name, UPPER(Logon) Logon, ...
FROM Users
- then do a global replace on your source code to change the table name to the view name, although if your table is called Users, that could be quite dangerous...
I have a table connecting principals to their roles. I have come upon a situation where I need to add a role for each user. I have a statement SELECT id FROM principals which grabs a list of all the principals. What I want to create is something like the following:
INSERT INTO role_principal(principal_id,role_id)
VALUES(SELECT id FROM principals, '1');
so that for each principal, it creates a new record with role_id = 1. I have very little SQL experience, so I don't know if I can do this as simply as I would like, or if there is some sort of loop feature in SQL that I could use.
Also, this is for a MySQL db (if that matters)
Use the VALUES keyword if you want to insert literal values directly. Omit it to use any SELECT (where the column count and types match) to supply the values instead.
INSERT INTO role_principal(principal_id,role_id)
(SELECT id, 1 FROM principals);
To avoid duplicates, it is useful to add a subquery:
INSERT INTO role_principal(principal_id,role_id)
(SELECT id, 1 FROM principals p
WHERE NOT EXISTS
(SELECT * FROM role_principal rp WHERE rp.principal_id=p.id AND role_id=1)
)