I have two tables.
Table1
------
ID(identity column)
Name
FC
Table2
------
ID
Name
I need to insert values into Table1 from Table2.
Table2 will have 1000 records.
So I am using an INSERT statement in SQL Server 2008:
insert into Table1 (Name, FC)
select Name, <generated value> as FC from Table2
The value for FC has to be generated manually.
I need to check that the generated FC value does not already exist in the database. If it does, a new value should be generated for FC before inserting.
I am working on scalar functions.
Any ideas will be appreciated.
In this scenario, you can use a unique identifier, i.e. NEWID(), in SQL Server. The major advantage of using GUIDs is that they are unique across all space and time. This comes in handy if you're consolidating records from multiple SQL Servers into one table, as in a data warehousing situation. GUIDs are also used heavily by SQL Server replication to keep track of rows when they're spread out among multiple SQL Servers.
You can implement a GUID in your query the following way:
INSERT INTO Table1 (Name, FC)
SELECT Name, NEWID() FROM Table2
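If you do want to generate FC values yourself and verify uniqueness explicitly, as the question asks, note that NEWID() cannot be called inside a scalar user-defined function, so the check belongs in a stored procedure or an ad-hoc batch. A minimal sketch, assuming FC is stored as a UNIQUEIDENTIFIER and @someName is a placeholder for the row being inserted:

```sql
-- Generate a candidate FC and regenerate until it is not present in Table1.
-- Assumes FC is a UNIQUEIDENTIFIER column; adapt the type and the
-- generator if your FC values are produced some other way.
DECLARE @fc UNIQUEIDENTIFIER = NEWID();

WHILE EXISTS (SELECT 1 FROM Table1 WHERE FC = @fc)
    SET @fc = NEWID();

INSERT INTO Table1 (Name, FC)
VALUES (@someName, @fc);  -- @someName: placeholder variable, declare as needed
```

Since NEWID() collisions are effectively impossible, the loop exists only to satisfy the "check before insert" requirement; with a hand-rolled generator it becomes genuinely necessary.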
I want to update the numerical columns of one table based on matching string columns from another table. I.e.,
I have a table (let's say table1) with 100 records containing 5 string (or text) columns and 10 numerical columns. Now I have another table (table2) with the same structure (columns) and 20 records. A few of these records contain updated data for table1, i.e., the numerical column values are updated for those records; the rest are entirely new (both text and numerical columns).
I want to update the numerical columns for records in table1 that have the same text columns, and insert the new rows from table2 into table1 where the text columns are also new.
I thought of taking an intersect of these two tables and then updating, but I couldn't figure out the logic of how to update the numerical columns.
Note: I don't have any primary or unique key columns.
Please help here.
Thanks in advance.
The simplest solution would be to use two separate queries, such as:
UPDATE b
SET b.[NumericColumn] = a.[NumericColumn],
etc...
FROM [dbo].[SourceTable] a
JOIN [dbo].[DestinationTable] b
ON a.[StringColumn1] = b.[StringColumn1]
AND a.[StringColumn2] = b.[StringColumn2] etc...
INSERT INTO [dbo].[DestinationTable] (
[NumericColumn],
[StringColumn1],
[StringColumn2],
etc...
)
SELECT a.[NumericColumn],
a.[StringColumn1],
a.[StringColumn2],
etc...
FROM [dbo].[SourceTable] a
LEFT JOIN [dbo].[DestinationTable] b
ON a.[StringColumn1] = b.[StringColumn1]
AND a.[StringColumn2] = b.[StringColumn2] etc...
WHERE b.[NumericColumn] IS NULL
--assumes that [NumericColumn] is non-nullable.
--If there are no non-nullable columns then you
--will have to structure your query differently
This will be effective if you are working with a small dataset that does not change very frequently and you are not worried about high contention.
There are still a number of issues with this approach - most notably what happens if either the source or destination table is accessed and/or modified while the update statement is running. Some of these issues can be worked around in other ways, but so much depends on the context of how the tables are used that it is difficult to provide a more effective, generically applicable solution.
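On SQL Server 2008 and later, the same update-then-insert logic can also be expressed as a single MERGE statement, which at least removes the window between the two separate statements. A sketch using the same placeholder table and column names as above:

```sql
MERGE [dbo].[DestinationTable] AS b
USING [dbo].[SourceTable] AS a
    ON  a.[StringColumn1] = b.[StringColumn1]
    AND a.[StringColumn2] = b.[StringColumn2]  -- etc...
WHEN MATCHED THEN
    -- Row with the same text columns exists: refresh the numeric data.
    UPDATE SET b.[NumericColumn] = a.[NumericColumn]  -- etc...
WHEN NOT MATCHED BY TARGET THEN
    -- Text columns are new: insert the whole row.
    INSERT ([StringColumn1], [StringColumn2], [NumericColumn])
    VALUES (a.[StringColumn1], a.[StringColumn2], a.[NumericColumn]);
```

MERGE has concurrency caveats of its own; a WITH (HOLDLOCK) hint on the target table is commonly recommended when concurrent writers are possible.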
I have two tables in two databases with identical schemas. The two databases are on different servers in different locations. Data can be inserted and updated in either table. The requirement is to sync the two tables so that both always have the up-to-date information.
The primary key column will always be unique in either database table.
How to achieve this via SSIS ?
Kindly guide.
You can achieve it with 2 Script Tasks. In the first one:
-- what exists in A and not in B
SELECT * INTO DB1.temp.TBL_A_except FROM
(
    SELECT pk FROM DB1.schema1.TBL_A
    EXCEPT
    SELECT pk FROM DB2.schema2.TBL_B
) AS diff_a;
-- what exists in B and not in A
SELECT * INTO DB2.temp.TBL_B_except FROM
(
    SELECT pk FROM DB2.schema2.TBL_B
    EXCEPT
    SELECT pk FROM DB1.schema1.TBL_A
) AS diff_b;
Second one:
INSERT INTO DB2.schema2.TBL_B
SELECT * FROM DB1.temp.TBL_A_except;
INSERT INTO DB1.schema1.TBL_A
SELECT * FROM DB2.temp.TBL_B_except;
DROP TABLE DB1.temp.TBL_A_except;
DROP TABLE DB2.temp.TBL_B_except;
If you really want to achieve this with SSIS transformation techniques, I'd use two data flows with two Cache Connection Managers acting as temp tables 1 and 2: the first data flow saves data into the cache, the second loads from the cache into the tables.
or
Two data flows. Source -> Lookup -> Destination.
Implement the Lookup to check the second table for the existence of the PK. If, for a record in Tbl_A, there is no such PK in Tbl_B, that row has to be inserted into Tbl_B; the Lookup's No Match Output directs the row to the Destination.
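The Lookup-and-insert logic above corresponds to a plain NOT EXISTS insert in T-SQL; a sketch of one direction, with the column list as a placeholder:

```sql
-- Insert rows from Tbl_A whose primary key is not yet present in Tbl_B.
INSERT INTO DB2.schema2.TBL_B (pk /* , other columns... */)
SELECT a.pk /* , other columns... */
FROM DB1.schema1.TBL_A AS a
WHERE NOT EXISTS (
    SELECT 1
    FROM DB2.schema2.TBL_B AS b
    WHERE b.pk = a.pk
);
```

The mirror-image statement handles the Tbl_B-to-Tbl_A direction.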
I have 100s of millions of unique rows spread across 12 tables in the same database. They all have the same schema/columns. Is there a relatively easy way to combine all of the separate tables into 1 table?
I've tried importing the tables into a single table, but given the huge number of rows, SQL Server is making me wait a long time, as if I were importing from a flat file. There has to be an easier/faster way, no?
You haven't given much info about your table structure, but you can probably just do a plain old insert from a select, as below. The example takes all records from Table2 and Table3 that don't already exist in Table1 and inserts them into Table1. You could do this to merge everything from all 12 of your tables into a single table.
INSERT INTO Table1
SELECT * FROM Table2
WHERE SomeUniqueKey
NOT IN (SELECT SomeUniqueKey FROM Table1)
UNION
SELECT * FROM Table3
WHERE SomeUniqueKey
NOT IN (SELECT SomeUniqueKey FROM Table1)
--...
Do what Jim says, but first:
1) Drop (or disable) all indices in the destination table.
2) Insert rows from each table, one table at a time.
3) Commit the transaction after each table is appended, otherwise much disk space will be taken up in case of a possible rollback.
4) Re-enable or recreate the indices after you are done.
If there is a possibility of duplicate keys, you may need to retain an index on the key field and have a NOT EXISTS clause to hold back the duplicate records from being added.
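Steps 1 and 4 above can be sketched as follows; IX_Table1_Key is a hypothetical nonclustered index name, so substitute your own:

```sql
-- 1) Disable each nonclustered index on the destination table before the
--    bulk inserts (leave any clustered index enabled, or the table
--    becomes unreadable).
ALTER INDEX IX_Table1_Key ON dbo.Table1 DISABLE;

-- 2)-3) ...perform the INSERT ... SELECT from each source table here,
--        committing after each one...

-- 4) REBUILD re-enables the index and rebuilds it in a single step.
ALTER INDEX IX_Table1_Key ON dbo.Table1 REBUILD;
```

If you keep an index on the key field for the duplicate check, disable only the remaining indexes.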
I am working on a project where my requirement is simply to update the database on the destination server from the local server (all tables, views, functions, rows and stored procedures).
Now I want to compare the local database's tables with the destination database's tables and insert the newly inserted rows from the local database into the destination table.
E.g.: I have a database dbsource and a database dbDestination, and both contain a table table1. Now I insert new rows into dbsource.table1.
Now I want to compare both database tables and insert the new rows into the destination table.
Please help me.
Why reinvent the wheel? There are lots of commercial applications out there that already do this for you:
Red-Gate SQL Data Compare
ApexSQL Data Diff
Assuming both Table1 tables have a primary key (unique) column, here's how you can implement that. I'll name the PK column ID:
INSERT INTO DBDESTINATION.<SCHEMA_NAME>.TABLE1
SELECT T1.* FROM DBSOURCE.<SCHEMA_NAME>.TABLE1 AS T1
LEFT OUTER JOIN DBDESTINATION.<SCHEMA_NAME>.TABLE1 AS T2 ON T1.ID = T2.ID
WHERE T2.ID IS NULL
Hope that helps.
I want to save new data from Access tables to SQL tables without overwriting the old data in SQL. The data table format is the same for Access and SQL (using Visual Basic).
If there is a unique id on the row, then you can check whether the value is already in the database (logically equivalent to this):
insert into data_table
select * from access_table a
where a.id not in (select id from data_table)
A unique id, of course, can be any subset of fields in the row that identifies a unique row.
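When that "unique id" is a combination of columns rather than a single field, NOT EXISTS expresses the check more naturally than NOT IN; col1 and col2 below are hypothetical key columns, so substitute the fields that identify a row in your tables:

```sql
-- Insert only rows whose (col1, col2) combination is not already present.
INSERT INTO data_table
SELECT a.*
FROM access_table AS a
WHERE NOT EXISTS (
    SELECT 1
    FROM data_table AS d
    WHERE d.col1 = a.col1
      AND d.col2 = a.col2
);
```

NOT EXISTS also behaves predictably when key columns contain NULLs, which NOT IN does not.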