Skip Columns During Teradata Table Import From CSV Using SQL Assistant

I have a CSV file with data I need to import to a Teradata table, but it has a useless column that I would like to exclude from the import. The useless column is the first column, so the CSV rows are set up like:
'UselessData','Data','Data','Data'
Typically, I would import using SQL Assistant by choosing File -> Import Data from the menu and using the basic query:
INSERT INTO TableName VALUES (?,?,?,?)
But this will collect the extraneous useless data in Column 1. Is there a way to specify that an import take only certain columns or send the useless column to NULL?

AFAIK you can't do that with SQL Assistant.
Possible workarounds:
Switch to Teradata Studio or TPT for loading (will also load faster)
Load all columns into a Volatile Table first (and don't forget to increase the Maximum Batch size for simple Imports in Tools -> Options -> Import) and then Insert/Select into the target.
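A minimal sketch of that second workaround, assuming a hypothetical volatile staging table named stg_import and generic column names (adjust them to your CSV and target table):
CREATE VOLATILE TABLE stg_import (
    useless_col VARCHAR(100),
    col1 VARCHAR(100),
    col2 VARCHAR(100),
    col3 VARCHAR(100)
) ON COMMIT PRESERVE ROWS;
-- run the File -> Import Data step against the staging table:
INSERT INTO stg_import VALUES (?,?,?,?);
-- then copy only the wanted columns; any target column you skip simply stays NULL:
INSERT INTO TableName (col1, col2, col3)
SELECT col1, col2, col3 FROM stg_import;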

Related

Query contains parameters but import file contains different values [importing csv to Teradata SQL]

I am using Teradata SQL to import a CSV file. I clicked import to activate the import operation, then typed the following
insert into databasename.tablename values(?,?,?,...)
I made sure to specify the database name as well as the table name, and I put 13 placeholders (question marks), one for each of the 13 columns in my CSV file.
It gives me the following error:
Query contains 13 parameters but Import file contains 1 data values
I have no idea what the issue is.
The default delimiter used by your SQL Assistant doesn't match the one used in the CSV, so it doesn't recognise all the columns.
In SQL Assistant, go to Tools >> Options >> Export/Import and choose the delimiter that matches the one used in your CSV.

Should I use SSIS or the SQL Server Import Export tool for a large bulk insert operation?

I will soon need to import millions of records into a single SQL Server database table which we use in production. The data to import will be available in the form of about 40 CSV files, each having hundreds of thousands of records.
For each row, some of the column values are supplied by the CSV files, whereas others will require values that I must supply myself.
I am trying to determine which tool to use. I noticed that SQL Server Management Studio comes with the Import Export Wizard. Is that tool advisable for this type of job? Or should I use SSIS instead?
Some other questions I have:
Should I "lock" the table during the operation?
Should I perform the insert into a copy of the production table and then, once the operation is validated, make the copy the official version of the production table?
Since you have logic to apply to the rows from the CSV (some values come from the file, while others you must supply yourself), the Import Export Wizard cannot handle it; it only performs a straightforward load. So you have to go with SSIS.
In SSIS you can use conditional branching to split the rows and supply the missing values to the target table.
For the second question: if possible, I would suggest loading into a separate table and then renaming it afterwards. That way, production system users are not impacted by the load.
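A minimal T-SQL sketch of that swap (the table names dbo.SalesData and dbo.SalesData_Staging are placeholders, and this assumes the staging table has already been loaded and validated):
BEGIN TRANSACTION;
EXEC sp_rename 'dbo.SalesData', 'SalesData_Old';       -- keep the old table as a fallback
EXEC sp_rename 'dbo.SalesData_Staging', 'SalesData';   -- promote the validated copy
COMMIT TRANSACTION;
Once you are satisfied with the new data, the old table can be dropped.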

Import Oracle data dump and overwrite existing data

I have an Oracle dmp file and I need to import its data into a table.
The data in the dump contains new rows and a few updated rows.
I am using the imp command with IGNORE=Y, so it imports all the new rows fine, but it doesn't import/overwrite the existing rows (it shows a unique key constraint violation warning).
Is there some option to make the import UPDATE the existing rows with new data?
No. If you were using data pump then you could use the TABLE_EXISTS_ACTION=TRUNCATE option to remove all existing rows and import everything from the dump file, but as you want to update existing rows and leave any rows not in the new file alone - i.e. not delete them (I think, since you only mention updating, though that isn't clear) - that might not be appropriate. And as your dump file is from the old exp tool rather than expdp that's moot anyway, unless you can re-export the data.
If you do want to delete existing rows that are not in the dump then you could truncate all the affected tables before importing. But that would be a separate step that you'd have to perform yourself; it's not something imp will do for you. And the tables would be empty for a while, so you'd need downtime to do it.
Alternatively you could import into new staging tables - in a different schema, since imp doesn't support renaming either - and then use those to merge the new data into the real tables. That may be the least disruptive approach. You'd still have to design and write all the merge statements yourself though; there's no built-in way to do this automatically.
You can import into a temp table and then reconcile the records by joining against it.
Use the impdp option REMAP_TABLE to load the dump into a temp table:
impdp .... REMAP_TABLE=ORIGINAL_TABLE:TMP_TABLE_NAME
When the load is done, run a MERGE statement on the existing table using the temp table as the source.
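A minimal sketch of that MERGE (the table names, the key column ID, and the data columns COL1/COL2 are placeholders for your actual schema):
MERGE INTO existing_table t
USING tmp_table_name s
ON (t.id = s.id)
WHEN MATCHED THEN
  UPDATE SET t.col1 = s.col1, t.col2 = s.col2
WHEN NOT MATCHED THEN
  INSERT (id, col1, col2) VALUES (s.id, s.col1, s.col2);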

Oracle SQL Dump file extracting parts to sql/another dump file

I have an Oracle DB dump file and now I only need parts of the tables that are included in it. Does anyone know how I can extract those parts into a separate dump file (or SQL)?
I thought about using the import command to go from the full-export dump file to a new dump file containing only the needed parts, something like this, but I don't know if it's possible this way:
import user/pw directory=fullexport_dump dumpfile=part.dmp logfile=import.log status=30
No, it's not possible. You can only limit rows while exporting, using the query parameter.
exp ..... query="where id=10"
You may search further in the Oracle Documentation.
So, import the whole table, and create a new table with only required parts:
create table NEEDEDPARTS as select * from FULLEXPORT where id=10
Or, import the whole table and re-export with query parameter.

How to import pipe delimited text file data to SQLServer table

I have a database table represented as a text file in the following pattern:
0|ALGERIA|0| haggle. carefully f|
1|ARGENTINA|1|al foxes promise|
2|BRAZIL|1|y alongside of the pendal |
3|CANADA|1|eas hang ironic, silent packages. |
I need to import this data to a SQL Server 2008 database table. I have created the table with the types matching the schema.
How to import this data to the table?
EDIT: Solved by following the answer selected.
Note to anyone stumbling upon this in future: The datatype needs to be converted.
Refer: http://social.msdn.microsoft.com/Forums/en/sqlintegrationservices/thread/94399ff2-616c-44d5-972d-ca8623c8014e
You could use the Import Data feature by right-clicking the database, then clicking Tasks and then Import Data. This gives you a wizard in which you can specify the delimiters etc. for your file and preview the output before any data is inserted.
If you have a large amount of data you can use bcp to bulk import from file: http://msdn.microsoft.com/en-us/library/ms162802.aspx
The bcp utility bulk copies data between an instance of Microsoft SQL Server and a data file in a user-specified format. The bcp utility can be used to import large numbers of new rows into SQL Server tables... Except when used with the queryout option, the utility requires no knowledge of Transact-SQL. To import data into a table, you must either use a format file created for that table or understand the structure of the table and the types of data that are valid for its columns.
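If you would rather stay in T-SQL than use the bcp command line, a minimal sketch with BULK INSERT (the table name dbo.Nation and the file path are assumptions; since the sample rows end with a trailing pipe, that pipe is treated as part of the row terminator):
BULK INSERT dbo.Nation
FROM 'C:\data\nation.txt'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '|\n');
Adjust the row terminator to '|\r\n' if your file uses Windows line endings.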