SQL Server Data Import / Export Wizard Truncation Error - sql

I'm trying to load a large file into a SQL Server table. I know that two of the columns are more than 50 characters wide, so on the 'Advanced' tab in the Import/Export Wizard I specify their widths as 115 and 75 respectively. I then run the rest of the job and still get a truncation error.
Is there another place I need to tell the Wizard about the change in length?

Check your database. There is a chance the table was created by your previous, unsuccessful wizard run. Delete that table and the wizard should re-run successfully without warnings.

Try letting the import wizard create the new table for you. Then you can use regular SQL to move it to its permanent table.
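Once the wizard has loaded the file into a staging table, the move to the permanent table is a plain INSERT ... SELECT with explicit casts. A minimal sketch of assembling such a statement (all table and column names here are hypothetical examples, and the widths mirror the 115/75 from the question):

```python
# Build an INSERT ... SELECT that moves rows from a wizard-created
# staging table into the permanent table, casting each column explicitly.
# Table and column names below are hypothetical.
def build_move_statement(staging, target, columns):
    """columns: list of (name, sql_type) pairs for the target table."""
    select_list = ", ".join(
        f"CAST([{name}] AS {sql_type}) AS [{name}]" for name, sql_type in columns
    )
    col_list = ", ".join(f"[{name}]" for name, _ in columns)
    return (
        f"INSERT INTO {target} ({col_list}) "
        f"SELECT {select_list} FROM {staging};"
    )

sql = build_move_statement(
    "dbo.Import_Staging", "dbo.Customers",
    [("Name", "VARCHAR(115)"), ("Address", "VARCHAR(75)")],
)
print(sql)
```

Run the generated statement in Management Studio; if it succeeds, the staging table can be dropped.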

Related

Import SQL records from file into sql DB and changing type

I created a .txt file using BULK that contains all the records of a SQL DB. Now I need to import these records into a new DB. The problem is that I need to change the type of some fields from DOUBLE to BIGINT, or the records won't be added to the new DB.
Which functions do I need, and how do I use them?
Thanks
To import into SQL Server, you can right-click on the database and select Tasks > Import Data. This opens the Import/Export Wizard, which is a slimmed-down version of the same functionality in SSIS.
Once the Wizard is open, you can navigate through the prompts until you reach the Select Source Data screen, where you can click the Edit Mappings button and modify the data types of the data being imported.
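The DOUBLE-to-BIGINT conversion itself amounts to checking that each exported value is integral before narrowing it. A sketch of that check, as you might apply it to the exported text file before import (the field values here are hypothetical):

```python
# A BULK-exported text file stores integral values as DOUBLE
# (e.g. "1001.0"). Convert them to BIGINT-compatible integers,
# rejecting any value with a fractional part. Sample data is made up.
def double_to_bigint(field):
    value = float(field)
    if value != int(value):
        raise ValueError(f"{field!r} is not integral; cannot cast to BIGINT")
    return int(value)

row = ["1001.0", "Smith", "250000.0"]
converted = [double_to_bigint(row[0]), row[1], double_to_bigint(row[2])]
print(converted)  # → [1001, 'Smith', 250000]
```

The same rule is what a CAST(... AS BIGINT) in the Edit Mappings step effectively performs on the server side.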

Exporting Data - IBM Data Studio DB2

I am trying to export a large query from a DB2 database to a text file on my desktop using IBM Data Studio and I can't seem to get anything to work. When I run the query and right click on the results tab->Export->All Results it only gives me the first 500 records. This table is going to be in the range of 15 MM records so I can't just change the display settings to allow it to display that many records. I cannot use the unload utility because it won't let me save the file to my desktop. Any ideas?
The row limit in the SQL Results view is probably active.
Click the triangle in the upper-left corner of the "SQL Results" view, then "Preferences...", to change it.
However, this is not a good way to export data, because all the rows must first be fetched into Data Studio and then exported through the GUI. It is better to use the "export" command.
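The export command takes an output file, a file format, and a SELECT statement. A sketch of assembling such a command for a shell call (the file, schema, and table names are hypothetical):

```python
# Assemble a DB2 EXPORT command of the form:
#   EXPORT TO <file> OF DEL <select statement>
# DEL is the delimited ASCII format. File and table names are made up.
def build_db2_export(outfile, query):
    return f'db2 "EXPORT TO {outfile} OF DEL {query}"'

cmd = build_db2_export(
    r"C:\Users\me\Desktop\results.del",
    "SELECT * FROM MYSCHEMA.BIG_TABLE",
)
print(cmd)
```

Because the command streams rows straight to the file, it avoids loading 15 MM records into the results view.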
In IBM Data Studio:
right-click the table/view, select Data > Extract, and browse to the destination where the file should be written.

How can I carry a database created in dbDesigner to SQL Server

I created a small database in dbDesigner which includes 4 tables, and I want to add these tables with their relationships to a database on a SQL Server. How can I do this?
The best practice I am aware of is to use Management Studio's built-in scripting functionality.
The following steps produce a file containing a SQL script that you can run on any server in order to import the schema (with or without the data).
Right-click on your database.
Select Tasks > Generate scripts
Click Next
Choose Script entire database and all database objects
Select Save to file and Single file
If you want to export data as well, click on Advanced and change Types of data to script to Schema and data (default is schema only)
Click Next through the remaining screens and finish.
Run the generated file on the server you want to import the schema to.
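The last step can be automated with the sqlcmd utility, where -S names the server, -d the target database, and -i the input script file. A sketch (server, database, and file names are hypothetical; it only prints the command here rather than executing it):

```python
# Run the script generated by "Generate Scripts" on the target server
# via sqlcmd. All names below are hypothetical placeholders.
import subprocess

def run_schema_script(server, database, script_path, dry_run=True):
    cmd = ["sqlcmd", "-S", server, "-d", database, "-i", script_path]
    if dry_run:  # for illustration, print instead of executing
        print(" ".join(cmd))
        return cmd
    return subprocess.run(cmd, check=True)

run_schema_script("TARGETSERVER", "MyDatabase", "schema_and_data.sql")
```

With dry_run=False it would actually invoke sqlcmd, so the target database must already exist and be reachable.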

SSIS - Any other solution apart from Script Task

Team,
My objective is to load data from Excel into SQL tables using SSIS. However, the Excel files are quite dynamic, i.e. their column count can vary, or the order of existing columns may change. The destination table, though, stays the same...
So I was contemplating on few options like:
1) Using a SQL command in the "Excel Source" - but unfortunately I have to keep the "first row as header" setting false (to work around the Excel Connection Manager inferring a numeric data type from the first few records), so querying by header name doesn't work here.
2) The other option in my mind is a Script Task with C# code that reads the Excel file based on the columns I know, so the order and the insertion/deletion of columns won't matter.
Is the Script Task the only option available to me, or is there a simpler way to achieve this in SSIS? If possible, please also suggest a reference.
Thanks,
Justin Samuel.
If you need to automate the process, then I'd definitely go with a script component / OleDbDataAdapter combo (you can't use a streamreader because Excel is a proprietary format). If not, go with the import wizard.
If you try to use a connection manager based solution, it's going to fail when the file layout changes. With the script component / OleDbDataAdapter combo, you can add logic in to interpret the fields and standardize the record layout before loading. You can also create an error buffer and gracefully push error values to it with Try / Catch.
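The "interpret the fields and standardize the record layout" step can be sketched independently of SSIS: match whatever columns the incoming header contains onto the fixed destination layout by name, filling missing columns with None (which loads as NULL). All column names here are hypothetical:

```python
# Standardize a dynamic source layout onto a fixed destination layout.
# Columns are matched by header name, so source column order and
# extra/missing columns no longer matter. Names are made up.
DEST_COLUMNS = ["OrderId", "Customer", "Amount"]

def standardize(header, row):
    by_name = dict(zip(header, row))
    # Columns absent from the source become None, i.e. NULL on load.
    return [by_name.get(col) for col in DEST_COLUMNS]

# Source file has reordered columns plus an extra one; "Amount" is missing.
header = ["Customer", "Region", "OrderId"]
row = ["Acme", "West", "1007"]
print(standardize(header, row))  # → ['1007', 'Acme', None]
```

In the SSIS script component the same dictionary lookup would run per row, with a Try / Catch pushing unparseable values to the error buffer.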
Here are some links on how to use the script component as a source in the data flow task:
http://microsoft-ssis.blogspot.com/2011/02/script-component-as-source-2.html
http://beyondrelational.com/modules/2/blogs/106/posts/11126/ssis-script-component-split-single-row-to-multiple-rows.aspx
This can be done easily using the "Import and Export Data" tool that ships with SQL Server.
Step 1: Specify your Excel as source and your SQL Server DB as destination.
Step 2: Provide necessary mappings.
Step 3: On the final screen, you can choose "Save as SSIS Package" to the File System. A corresponding .dtsx SSIS package will be created for you.
After the SQL Server Import and Export Wizard has created the package and copied the data, you can use the SSIS Designer to open and change the saved package by adding tasks, transformations, and event-driven logic.
(Since the mapping works by header, column order should not matter; and if a particular column is missing, it should automatically be given NULL.)
Reference: http://msdn.microsoft.com/en-us/library/ms140052.aspx

SQL Server 2000, how to automate import data from excel

Say the source data comes in Excel format; below is how I currently import it.
Converting to csv format via MS Excel
Roughly find bad rows/columns by inspection
backup the table that needs to be updated in SQL Query Analyzer
truncate the table (may need to drop foreign key constraint as well)
import data from the revised csv file in SQL Server Enterprise Manager
If there's an error like duplicate columns, I need to check the original csv and remove them
I was wondering how to make this procedure more efficient at every step? I have some ideas, but they're not complete.
For steps 2 & 6: use scripts that can check automatically and print out all erroneous row/column data, so it's easier to remove all the errors at once.
For steps 3 & 5: is there any way to update the table automatically, without manually going through the import steps?
Could the community advise, please? Thanks.
I believe SQL 2000 still ships with DTS (Data Transformation Services) as part of Enterprise Manager. Using that, you should be able to create a workflow that does all of these steps in sequence. I believe it can natively import Excel as well. You can run everything from SQL queries to VBScript, so there's pretty much nothing you can't do.
I used to use it for these kind of bucket brigade jobs all the time.
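The automatic row/column checks the question envisions for steps 2 and 6 can be sketched as a small script that flags duplicate header columns and rows with the wrong field count, so all the errors can be fixed in one pass (the sample layout is hypothetical):

```python
# Scan a CSV for the two problems mentioned in the question:
# duplicate header columns, and rows whose field count differs
# from the header's. The sample data below is made up.
import csv
import io

def find_bad_rows(csv_text):
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    problems = []
    dupes = {c for c in header if header.count(c) > 1}
    if dupes:
        problems.append(f"duplicate columns: {sorted(dupes)}")
    for lineno, row in enumerate(reader, start=2):
        if len(row) != len(header):
            problems.append(
                f"line {lineno}: expected {len(header)} fields, got {len(row)}"
            )
    return problems

sample = "id,name,name\n1,Ann,A\n2,Bob\n"
print(find_bad_rows(sample))
```

A DTS workflow could call a script like this before the truncate/import steps and abort the job when it reports anything.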