Pushing a DataTable from memory into SQL Server using ADO.NET - sql-server-2005

I have a DataTable in memory that I got from a non-SQL source.
What is the most elegant way, using ADO.NET, to push it "as is" into a new SQL Server (2005) table?

You'd need to do this as a series of steps. First, create the table via some dynamic SQL. Then load the information from memory into the newly created table, potentially using a BULK INSERT.

Or create the table on the SQL Server, read one row at a time, and add it to the table.

CREATE TABLE based on the DataTable's structure, then SqlBulkCopy it.
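A minimal sketch of that approach, assuming the data lives in a DataTable named dt and that the connection string and table name are placeholders. For brevity the generated CREATE TABLE types every column as NVARCHAR(MAX); a real implementation would map the .NET column types to proper SQL types:

using System.Data;
using System.Data.SqlClient;
using System.Linq;

static void CopyToNewTable(DataTable dt, string connectionString, string tableName)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();

        // Naive CREATE TABLE derived from the DataTable's columns;
        // every column is NVARCHAR(MAX) here purely for illustration.
        string columns = string.Join(", ",
            dt.Columns.Cast<DataColumn>()
              .Select(c => "[" + c.ColumnName + "] NVARCHAR(MAX)"));
        using (var create = new SqlCommand(
            "CREATE TABLE [" + tableName + "] (" + columns + ")", conn))
        {
            create.ExecuteNonQuery();
        }

        // Push the in-memory rows into the new table in one bulk operation.
        using (var bulk = new SqlBulkCopy(conn))
        {
            bulk.DestinationTableName = tableName;
            bulk.WriteToServer(dt);
        }
    }
}

Note that the column and table names are concatenated into the CREATE TABLE statement, so this should only be used with trusted input or after validating the identifiers.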

Related

How to insert R dataframe into existing table in SQL Server

After trying a few different packages and methods found online, I have yet to find a solution that works for inserting a dataframe from R into an existing table in SQL Server.
I've had great success doing this with MySQL, but SQL Server seems to be more difficult.
I have managed to write a new table using the DBI package, but I can't find a way to insert into using this method. Looking at the documentation, there doesn't seem to be a way of inserting.
As there are more than 1000 rows of data, using sqlQuery from the RODBC package also seems infeasible.
Can anybody suggest a working method for inserting large amounts of data from a dataframe into an existing SQL table?
I've had similar needs using R and PostgreSQL with the R-specific Postgres drivers, and I imagine similar issues exist with SQL Server. The best solution I found was to write to a temporary table in the database using either dbWriteTable or one of the underlying functions that write from a stream, which help when loading very large tables (for Postgres, postgresqlCopyInDataframe, for example). The latter usually requires more work in terms of defining and aligning SQL data types and R class types to ensure the write succeeds, whereas dbWriteTable tends to be a bit easier. Once the data is in a temporary table, you can then issue an SQL statement to insert into your real table as you would within the database environment. Below is an example using high-level DBI library database calls:
dbExecute(conn, "start transaction;")                                            # begin an explicit transaction
dbExecute(conn, "drop table if exists myTempTable")                              # clear out any leftover staging table
dbWriteTable(conn, "myTempTable", df)                                            # write the R data frame into a staging table
dbExecute(conn, "insert into myRealTable(a,b,c) select a,b,c from myTempTable")  # copy the rows into the target table
dbExecute(conn, "drop table if exists myTempTable")                              # drop the staging table
dbExecute(conn, "commit;")                                                       # commit the transaction

SQL Server: Create a duplicate of a database without the data

I have a database called AQOA_Core with a huge amount of data.
I have a newly created database called AQOA_Core1 which is basically empty. I want to write a query to duplicate AQOA_Core to AQOA_Core1 without the data. I guess, to be precise, I want to create a skeleton of the primary database in the secondary database.
PS: I use Toad for my database operations.
You can use the SQL Server Script Wizard for scripting database objects. You can exclude the data and select the database object types you want to include in your script.
Please check the SQL Server guide I referenced above.
I hope it helps you.
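If you prefer to do the same thing programmatically rather than through the wizard, here is a rough sketch using SQL Server Management Objects (SMO) from .NET; the server name is a placeholder, and the scripting options shown are just one reasonable combination for a schema-only copy:

using System;
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;

static void ScriptSchemaOnly(string serverName, string databaseName)
{
    var server = new Server(new ServerConnection(serverName));
    var db = server.Databases[databaseName];

    var scripter = new Scripter(server);
    scripter.Options.ScriptSchema = true;   // script object definitions
    scripter.Options.ScriptData = false;    // no INSERT statements: schema only
    scripter.Options.Indexes = true;        // include indexes
    scripter.Options.DriAll = true;         // include keys and other constraints

    foreach (Table table in db.Tables)
    {
        if (table.IsSystemObject) continue;
        foreach (string line in scripter.EnumScript(new SqlSmoObject[] { table }))
            Console.WriteLine(line);        // run the emitted statements against the empty database
    }
}

The generated script can then be executed against AQOA_Core1 to recreate the structure without copying any rows.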

SSIS import all Paradox tables into SQL Server 2012

Writing my first SSIS package in VS 2012 and have managed to get it to connect to the Paradox tables without any problems.
What I need to do is go through each table and import the data into a corresponding table on a SQL Server database. There is no transformation of the data, as the table structures are the same. All that needs to be done is the data in the SQL Server database must first be deleted and then the data from the Paradox tables inserted.
I can connect one table in Paradox to one table in SQL Server, but I want to do them all. Please tell me I don't need a separate data flow task for each.
Thanks
Ken
Try using a ForEach Loop Container to enumerate all the tables in your database.

SQL Server 2008 INSERT Optimization

I have to INSERT a lot of rows (more than 1,000,000,000) into a SQL Server database. The table has an auto-increment identity Id, two varchar(80) columns, and a smalldatetime with GETDATE as the default value. The last one is just for auditing, but necessary.
I'd like to know the best (fastest) way to INSERT the rows. I've been reading about BULK INSERT, but if possible I'd like to avoid it, because the app does not run on the same server where the database is hosted and I'd like to keep them as isolated as possible.
Thanks!
Diego
Another option would be bcp.
Alternatively, if you're using .NET you can use the SqlBulkCopy class to bulk insert data. I've blogged about its performance, which you may be interested in, comparing SqlBulkCopy with another way of bulk loading data into SQL Server from .NET (using SqlDataAdapter). In a basic example, loading 100,000 rows took 0.8229s using SqlBulkCopy vs. 25.0729s using the SqlDataAdapter approach.
Create an SSIS package that copies the file to the SQL Server machine and then uses a Data Flow task to import the data from the file into the SQL Server database.
There is no faster/more efficient way than BULK INSERT, and when you're dealing with such a large amount of data, do not even think about anything from .NET: thanks to the GC, managing millions of objects in memory causes massive performance degradation.
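That said, if the SqlBulkCopy route mentioned above is used, the memory concern can be mitigated by streaming from an IDataReader instead of building up objects. Below is a rough sketch where the destination table name, batch size, and the source reader are all placeholders:

using System.Data;
using System.Data.SqlClient;

static void BulkLoad(IDataReader source, string connectionString)
{
    using (var bulk = new SqlBulkCopy(connectionString))
    {
        bulk.DestinationTableName = "dbo.TargetTable"; // hypothetical destination table
        bulk.BatchSize = 10000;                        // commit in chunks instead of one huge batch
        bulk.BulkCopyTimeout = 0;                      // no timeout for a long-running load
        bulk.EnableStreaming = true;                   // .NET 4.5+: stream rows rather than buffering them
        bulk.WriteToServer(source);                    // rows are pulled from the reader as needed
    }
}

The source reader could be, for example, a reader over the input file, so the full row set never has to be materialized in memory at once.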

Move Data from Oracle to SQL Server

I would like to copy parts of an Oracle DB to a SQL Server DB. I need to move the data because the Oracle box is being decommissioned. I only need the data for reference purposes, so I don't need indexes, stored procedures, constraints, etc. All I need is the data.
I have a link to the Oracle DB in SQL Server. I have tested the following query, which seemed to work just fine:
select *
into NewTableName
from linkedserver.OracleTable
I was wondering if there are any potential issues with using this approach?
Using SSIS (SQL Server Integration Services) may be a good alternative, especially if your table names are the same on both servers. Use the Import Wizard and it should create the destination tables for you and let you edit any mappings.
The only issue I see with that is that you will of course need to execute it for each and every table you need. Glad you are decommissioning the Oracle server :-). Otherwise, if you are not concerned with indexes or any of the existing sprocs, I don't see any issue in what you are doing.
The "select " approach could be very slow if tables are large. Consider writing pro*C in that case or use Fastreader http://www.wisdomforce.com/products-FastReader.html
A faster and easier approach might be to use Data Transformation Services, depending on the number of objects you're trying to copy over.