Exporting data to CSV from SQL Server

I have a SQL Server table that contains columns of type varchar(MAX), and I'm trying to get a .csv (UTF-8) export, but I get the errors below instead.
I want to know how I can change this data type to nvarchar(MAX) without losing data. Can anyone help?
Messages
Error 0xc00470d4: Data Flow Task 1: The code page on Destination - test_csv.Inputs[Flat File Destination Input].Columns[test1] is 1252 and is required to be 65001.
(SQL Server Import and Export Wizard)
Error 0xc00470d4: Data Flow Task 1: The code page on Destination - test_csv.Inputs[Flat File Destination Input].Columns[test2] is 1252 and is required to be 65001.
(SQL Server Import and Export Wizard)
(many more of the same error - for each column)
Error 0xc004700c: Data Flow Task 1: One or more component failed validation.
(SQL Server Import and Export Wizard)
Error 0xc0024107: Data Flow Task 1: There were errors during task validation.
(SQL Server Import and Export Wizard)
I have looked into similar questions where others have the same problem, but I don't know how to use CAST or CONVERT in the query.

You can solve this in 3 ways:
1. CAST your source columns to NVARCHAR in the initial SELECT, e.g. CAST(mycolumn AS nvarchar(column_length)); a minimal sketch is shown after this list.
2. Use an SSIS Data Conversion transformation to convert the strings to Unicode.
3. Set the Flat File Connection Manager's code page to '1252 ANSI - Latin I'.
(Code page 65001 = Unicode (UTF-8).)
You might find that the 3rd option is easiest.
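For the first option, here is a minimal sketch. The column names test1 and test2 are taken from the error messages above; the table name dbo.MyTable is a placeholder for your actual source table:

```sql
-- Cast varchar(MAX) columns to nvarchar(MAX) in the source query so the
-- wizard treats them as Unicode, compatible with the UTF-8 (65001) destination.
SELECT
    CAST(test1 AS nvarchar(MAX)) AS test1,
    CAST(test2 AS nvarchar(MAX)) AS test2
FROM dbo.MyTable;  -- placeholder table name
```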

SQL Import issues with large numeric values

I am facing issues when importing large numeric values into a pre-defined table in SQL Server. Any value greater than or equal to 100,000,000 (one hundred million) causes an error and fails on import. A value of 99,999,999 is fine. Does anyone know what is going on and how to solve this, please?
Notes:
The table I import data into is pre-defined; I can only append rows. The target column is defined as float (NULL). The import wizard seems to set the data type to float automatically. I could not change it to numeric or decimal because a) I do not have admin rights, and b) there is something odd with the source data when importing, but I cannot see anything wrong when looking at it in Excel; all values are numeric and the maximum length is only 11.
The files I use are in .csv format, and many have more than 1 million rows. I tried splitting them up using Python and saving them in .xlsx format to put them through an Excel formatting step, but that did not help.
I am using the import wizard in SSMS v18.12.1 with its SQL Server Native Client 11.0 provider.
The error message I get is:
Error 0xc02020c5: Data Flow Task 1: Data conversion failed while converting column "ColumnName" (42) to column "ColumnName" (148). The conversion returned status value 6 and status text "Conversion failed because the data value overflowed the specified type.".
(SQL Server Import and Export Wizard)
and
Error 0xc0209029: Data Flow Task 1: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "Data Conversion 0 - 0.Outputs[Data Conversion Output].Columns[ColumnName]" failed because error code 0xC0209082 occurred, and the error row disposition on "Data Conversion 0 - 0.Outputs[Data Conversion Output].Columns[ColumnName]" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.
(SQL Server Import and Export Wizard)

Access 2007 Attachment type to SQL Server

My predecessor created an Access database in 2007, which has a number of tables, forms and queries.
One of the tables holds images, which are stored as attachments.
The database is almost 2GB in size (compacting and repairing doesn't change it), so I'd like to convert it to a SQL Server 2014 database.
I've used the Data Import tool, which has copied the tables and data into SQL Server, but the attachments are just converted to a string of their file name (image1.jpg).
Is there any way to import the images from the MS Access database into SQL Server?
I also have a SharePoint 2010 server; would this be a better option?
Update (sorry, new to Stack Overflow; not sure what to put where):
I've changed the field mapping in the import wizard to IMAGE, and I get the following error. If I set it to ignore, the fields are blank; using VARSTRING puts either NULL or 0x in the field.
Executing (Error)
Messages
Error 0xc02020c5: Data Flow Task 1: Data conversion failed while converting column "Photos" (23) to column "Photos" (57). The conversion returned status value 2 and status text "The value could not be converted because of a potential loss of data.".
(SQL Server Import and Export Wizard)
Error 0xc0209029: Data Flow Task 1: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "Data Conversion 0 - 0.Outputs[Data Conversion Output].Columns[Photos]" failed because error code 0xC020907F occurred, and the error row disposition on "Data Conversion 0 - 0.Outputs[Data Conversion Output].Columns[Photos]" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.
(SQL Server Import and Export Wizard)
Error 0xc0047022: Data Flow Task 1: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "Data Conversion 0 - 0" (49) failed with error code 0xC0209029 while processing input "Data Conversion Input" (50). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.
(SQL Server Import and Export Wizard)
Error 0xc02020c4: Data Flow Task 1: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.
(SQL Server Import and Export Wizard)
Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on Source - Attachments returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
(SQL Server Import and Export Wizard)
You've used SQL Server to import the data, which is a good thing. I would probably use the data type IMAGE for the column that holds the images, although IMAGE is deprecated, so you could also try the data type VARBINARY(MAX).
Use the Import and Export Wizard to import the data and see which data type works.
Hope this helps you out.
Edit:
You say that when you convert the attachment table you only have the names of the images. Are these names linked to the actual image files? If you still have the images, you can store them in a folder and use FILESTREAM to reference them from SQL Server.
Using FILESTREAM keeps the database small because the image data is no longer saved in the database, only a reference to it.
Unfortunately, SQL Server doesn't support multi-valued or attachment-like fields.
For more information about FILESTREAM, look at this website: http://msdn.microsoft.com/library/hh461480
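As a sketch of the two storage options mentioned above (table and column names are placeholders, and the FILESTREAM variant assumes FILESTREAM is enabled on the instance with a FILESTREAM filegroup configured):

```sql
-- Option 1: store the image bytes directly in the table.
CREATE TABLE dbo.Photos (
    PhotoId   int IDENTITY(1,1) PRIMARY KEY,
    FileName  nvarchar(260) NOT NULL,
    ImageData varbinary(MAX) NULL   -- preferred over the deprecated IMAGE type
);

-- Option 2: FILESTREAM keeps the bytes on the file system and requires
-- a ROWGUIDCOL column on the table.
CREATE TABLE dbo.PhotosFilestream (
    PhotoId   uniqueidentifier ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    FileName  nvarchar(260) NOT NULL,
    ImageData varbinary(MAX) FILESTREAM NULL
);
```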

Issue exporting from SQL Server 2012 to PostgreSQL

I am trying to export a SQL Server 2012 database into PostgreSQL 9.3, but I am getting a weird conversion error for the float columns. If the import skips the float columns, everything goes well (about 300k rows); otherwise I get the following:
- Copy in "TABLE_TO_COPY" (Error)
Messages
Error 0xc020844b: Data Flow Task 1: Exception during data insertion. The message returned by the provider is: Unable to cast object of type 'System.Double' to type 'System.Char[]'.
(SQL Server Import and Export Wizard)
Error 0xc0047022: Data Flow Task 1: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "Destination - TABLE_TO_COPY" (94) failed with error code 0xC020844B while processing input "Destination Input" (97). The component returned an error from the ProcessInput method. [...]
Error 0xc02020c4: Data Flow Task 1: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020. [...]
(SQL Server Import and Export Wizard)
Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on Source - PATIENT_DIALYSIS_SYMPTOM returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
(SQL Server Import and Export Wizard)
It's not a memory issue, since the server settings have the highest possible value. What could cause this?
The issue only appears in SQL Server Management Studio's Import/Export Wizard; building the package in SSIS instead, the issue is not there.
This is the configuration I have set up for my Data Flow Task:
Source: OLE DB connection pointing to my SQL Server database table
Destination: ODBC connection pointing to my PostgreSQL table
It doesn't look like SSIS is able to create the destination tables in the target DB, so I had to create them before setting up the project.
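As a sketch of that last step (all names here are hypothetical), the destination table can be created in PostgreSQL ahead of time, mapping SQL Server float columns to PostgreSQL double precision:

```sql
-- Run in PostgreSQL before configuring the SSIS destination.
-- SQL Server float(53) corresponds to PostgreSQL double precision;
-- mapping a float column to a character type can produce cast errors like
-- "Unable to cast object of type 'System.Double' to type 'System.Char[]'".
CREATE TABLE table_to_copy (
    id          integer PRIMARY KEY,
    label       varchar(255),
    measurement double precision   -- was float in SQL Server
);
```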

Errors in SQL Server while importing CSV file despite varchar(MAX) being used for each column

I'm trying to insert a large CSV file (several gigs) into SQL Server, but once I go through the Import Wizard and finally try to import the file I get the following error report:
Executing (Error)
Messages
Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data conversion for column "Title" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
(SQL Server Import and Export Wizard)
Error 0xc020902a: Data Flow Task 1: The "Source - Train_csv.Outputs[Flat File Source Output].Columns["Title"]" failed because truncation occurred, and the truncation row disposition on "Source - Train_csv.Outputs[Flat File Source Output].Columns["Title"]" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
(SQL Server Import and Export Wizard)
Error 0xc0202092: Data Flow Task 1: An error occurred while processing file "C:\Train.csv" on data row 2.
(SQL Server Import and Export Wizard)
Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on Source - Train_csv returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
(SQL Server Import and Export Wizard)
I created the table to insert the file into first, and I set each column to hold varchar(MAX), so I don't understand how I can still have this truncation issue. What am I doing wrong?
In the SQL Server Import and Export Wizard you can adjust the source data types in the Advanced tab (these become the data types of the output if creating a new table, but otherwise are just used for handling the source data).
The data types are annoyingly different from those in MS SQL: instead of VARCHAR(255) it's DT_STR with an output column width of 255, and for VARCHAR(MAX) it's DT_TEXT.
So, on the Data Source selection, in the Advanced tab, change the data type of any offending columns from DT_STR to DT_TEXT (you can select multiple columns and change them all at once).
This answer may not apply universally, but it fixed the occurrence of this error I was encountering when importing a small text file. The flat file provider was importing based on fixed 50-character text columns in the source, which was incorrect. No amount of remapping the destination columns affected the issue.
To solve the issue, in the "Choose a Data Source" step for the flat-file provider, after selecting the file, a "Suggest Types.." button appears beneath the input column list. After hitting this button, even if no changes were made in the ensuing dialog, the Flat File provider re-queried the source .csv file and correctly determined the lengths of the fields in the source file.
Once this was done, the import proceeded with no further issues.
I think it's a bug; please apply the workaround and then try again: http://support.microsoft.com/kb/281517.
Also, go into the Advanced tab and confirm that the target column's length is VARCHAR(MAX); a quick check is sketched below.
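As a sketch of that check (the table name dbo.Train is a placeholder), you can confirm the destination column types from the catalog views; max_length = -1 indicates (MAX):

```sql
-- List the destination table's column types; max_length = -1 means (MAX).
SELECT c.name, t.name AS type_name, c.max_length
FROM sys.columns AS c
JOIN sys.types  AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.Train');  -- placeholder table name
```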
The Advanced Editor did not resolve my issue; instead I was forced to edit the dtsx file through Notepad (or your favorite text/XML editor) and manually replace the attribute values with
length="0" dataType="nText" (I'm using Unicode).
Always make a backup of the dtsx file before you edit it in text/XML mode.
Running SQL Server 2008 R2.
Go to the Advanced tab, then to the data type of the column, and change the data type from DT_STR to DT_TEXT with a column width of 255. It should then work perfectly.
Issue:
The Jet OLE DB provider reads a registry key (TypeGuessRows) to determine how many rows are to be read to guess the type of each source column.
By default, the value of this key is 8, so the provider scans the first 8 rows of the source data to determine the data types for the columns. If any field looks like text and the length of the data is more than 255 characters, the column is typed as a memo field. So, if there is no data with a length greater than 255 characters in the first 8 rows of the source, Jet cannot accurately determine the nature of the data type.
Because the data in the first 8 rows of the exported sheet is shorter than 255 characters, the provider treats the source column as VARCHAR(255) and cannot read data from the column where the values are longer.
Fix:
The solution is simply to sort the comment column in descending order, so that the longest values land in the rows that are sampled; one way to produce such an export is shown below.
From 2012 onwards, you can also update the values in the Advanced tab of the Import Wizard.
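A sketch of the sorting trick (table and column names are hypothetical):

```sql
-- Put the longest comments in the first rows so that the provider's
-- 8-row type-guessing sample sees a value longer than 255 characters
-- and types the column as a memo field.
SELECT *
FROM dbo.SourceTable          -- placeholder name
ORDER BY LEN(Comment) DESC;   -- placeholder column name
```

Note that sorting by LEN() orders by actual value length, which is what the type-guessing sample needs; a plain descending sort on the text itself does not guarantee the longest value comes first.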

Importing csv file to table

I'm trying to import a CSV file into a table in SQL Server 2005 with the wizard, but when I import the file it always gives me these errors:
Executing (Error)
Messages
Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data conversion for column "Column 15" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
(SQL Server Import and Export Wizard)
Error 0xc020902a: Data Flow Task 1: The "output column "Column 15" (70)" failed because truncation occurred, and the truncation row disposition on "output column "Column 15" (70)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
(SQL Server Import and Export Wizard)
Error 0xc0202092: Data Flow Task 1: An error occurred while processing file "C:\PEP_ENTITIES_71.csv" on data row 1.
(SQL Server Import and Export Wizard)
Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Source - PEP_ENTITIES_71_csv" (1) returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
(SQL Server Import and Export Wizard)
What am I doing wrong? The file was exported from a query; I then deleted all records from the table and tried to import it again.
The flat file import task will default column lengths to VARCHAR(50). You need to go to the Advanced page of the flat file connection manager and change the lengths manually to match the destination table. There is also a Suggest Types option, which may match the metadata you're using, but it samples rows from the file, so it may not be as accurate as just setting the types manually.
It may be that you're trying to import data from the CSV which is too large for the field you're importing it into. Perhaps you need to increase the size of the fields in your SQL table?
Have you tried importing a single, very small line of data from the CSV to see if that works? If it does, excessively large data somewhere in the rest of the file may be the problem.
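If a destination column does turn out to be too small, widening it is a one-line change; a sketch with hypothetical table and column names:

```sql
-- Widen a destination column that is too small for the incoming CSV data.
ALTER TABLE dbo.PEP_ENTITIES
ALTER COLUMN Column15 varchar(MAX);  -- placeholder table and column names
```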