SQL Server 2008 - Column defined as varchar(600) only captures 255 characters

Why is my longcomment field (defined as varchar(600)) only capturing up to 255 characters when the data comes from an Excel spreadsheet? The value in the spreadsheet is over 300 characters long and the column in the table is defined as varchar(600). However, when I select that field in SQL, it is truncated at 255.
Thanks!

When an Excel file is parsed for import, only a certain number of rows are scanned to determine the column sizes of the input. What you need to change depends on how you are importing the data: you either need to override the detected column size or increase the number of rows scanned. Leave a comment with the import method you are using if you need additional help.
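For example, if you are pulling the sheet in with an ad hoc distributed query, the provider string is where you control the driver's behavior; the number of rows it scans is governed by the TypeGuessRows registry value described further down this page. A minimal sketch, assuming a hypothetical file path and sheet name (the 'Ad Hoc Distributed Queries' server option must be enabled):

-- IMEX=1 tells the ACE driver to treat columns with mixed types as text
SELECT longcomment
FROM OPENROWSET(
    'Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0 Xml;Database=C:\Data\Comments.xlsx;HDR=YES;IMEX=1',
    'SELECT * FROM [Sheet1$]');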

In Microsoft SQL Server Management Studio, you can change the displayed size by going to the Tools > Options menu and opening the Query Results branch of the tree control. Then under the Results to Text leaf is the "Maximum number of characters in a column" value.

Related

Why do SQL varchar(256) columns not get populated to my flat file in an SSIS package? [duplicate]

I have an SSIS package which sources from an Excel file, performs a lookup in SQL, and then writes the fields from the lookup to a flat file. For some reason, any of the fields in the SQL table that are of data type varchar(256) are not getting written. They are coming through as nulls. My other fields, including varchar(255), are coming across fine. I have tried both flat file and Excel as the destination with no luck.
I've tried converting the varchar with a data conversion, both to 256 characters and to a Unicode string, with no luck.
Even when I preview a simple query in the source component (e.g. select lastname from xyz), the preview shows lastname as null. It doesn't show other fields that have different data types as nulls.
This is usually the case when the Excel driver reads only the first 8 rows of data and misinterprets the correct data type because of how little data it checks. Here are some of the known issues from the Microsoft site: Reference
Issues with importing
Empty rows
When you specify a worksheet or a named range as the source, the driver reads the contiguous block of cells starting with the first non-empty cell in the upper-left corner of the worksheet or range. As a result, your data doesn't have to start in row 1, but you can't have empty rows in the source data. For example, you can't have an empty row between the column headers and the data rows, or a title followed by empty rows at the top of the worksheet.
If there are empty rows above your data, you can't query the data as a worksheet. In Excel, you have to select your range of data and assign a name to the range, and then query the named range instead of the worksheet.
Missing values
The Excel driver reads a certain number of rows (by default, eight rows) in the specified source to guess at the data type of each column. When a column appears to contain mixed data types, especially numeric data mixed with text data, the driver decides in favor of the majority data type, and returns null values for cells that contain data of the other type. (In a tie, the numeric type wins.) Most cell formatting options in the Excel worksheet do not seem to affect this data type determination.
You can modify this behavior of the Excel driver by specifying Import Mode to import all values as text. To specify Import Mode, add IMEX=1 to the value of Extended Properties in the connection string of the Excel connection manager in the Properties window.
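As a sketch, the resulting Excel connection string looks something like this (the file path is a placeholder):

Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Data\Source.xlsx;Extended Properties="Excel 12.0 Xml;HDR=YES;IMEX=1";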
Truncated text
When the driver determines that an Excel column contains text data, the driver selects the data type (string or memo) based on the longest value that it samples. If the driver does not discover any values longer than 255 characters in the rows that it samples, it treats the column as a 255-character string column instead of a memo column. Therefore, values longer than 255 characters may be truncated.
To import data from a memo column without truncation, you have two options:
Make sure that the memo column in at least one of the sampled rows contains a value longer than 255 characters, or
Increase the number of rows sampled by the driver to include such a row. You can increase the number of rows sampled by increasing the value of TypeGuessRows under the following registry key (a command-line sketch follows the key list):
Redistributable components version and registry key:
Excel 2016 - HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\Office\16.0\Access Connectivity Engine\Engines\Excel
Excel 2010 - HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\Office\14.0\Access Connectivity Engine\Engines\Excel
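As an illustrative sketch (a value of 0 makes the driver sample all rows, at some cost in scan time; adjust the key path to match your driver version):

reg add "HKLM\SOFTWARE\WOW6432Node\Microsoft\Office\16.0\Access Connectivity Engine\Engines\Excel" /v TypeGuessRows /t REG_DWORD /d 0 /f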

Convert number format in SSIS

I need some help, please.
I have Excel data that is formatted. For example, 33.257615111 shows as 33.258 (thirty-three million two hundred fifty-eight thousand).
I am trying to import the data file into SQL using an SSIS ETL package. Is there a way I can convert this during the ETL process to a number with 2 decimal places, i.e. 33257615.11?
Thank you
This can be done. Use a Derived Column component and the ROUND() expression.
Inside your Data Flow, between the Excel Source component and the OLE DB Destination component, add a Derived Column component. Inside it, create a new column that rounds the value:
(DT_NUMERIC,25,2)ROUND(MyColumnName, 2)
Map this new derived column to your SQL destination and you are all set.
But! Are you SURE you want to store rounded numbers in your database?
Will you ever need to sum this column? If so, the rounded values are going to yield a very different total than the non-rounded values. Maybe don't do this rounding in SSIS; instead, store the full numbers and let applications reading this data do the formatting. It's just as easy:
SELECT ROUND(MyColumnName,2) AS MyColumnName
FROM MyTable
In a multitier architecture, the formatting of numbers should be left to the display layer, not the data layer. This is exactly what Excel is doing: it shows you a rounded number, but behind the scenes is the full value. Don't drop precision for the sake of formatting.

Excel to SQL table field value appending with 0

I loaded an Excel file into a SQL table. In the Excel file, one field consists of VARCHAR data (Excel column format General). When loaded into the SQL table, some of these values are prefixed with a zero.
Example: in the Excel file the value is 1081999; the same value becomes 01081999 in the SQL table.
What might be the reason for this ?
Excel will hide leading 0's when it identifies the field's content as a number and displays it as such. I would assume that the Excel worksheet does indeed contain these leading 0's and they are simply not shown by Excel. If you change the format of the column from General to Text, do they show up?
As a side note, if these are indeed numbers you should be storing them in a numeric data type in the database...
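As a hedged sketch on the SQL side (the table and column names are hypothetical; TRY_CAST requires SQL Server 2012 or later):

-- Spot values that carry a leading zero by comparing text and numeric forms
SELECT RefValue, TRY_CAST(RefValue AS int) AS NumericValue
FROM dbo.ImportedData
WHERE RefValue LIKE '0%';

-- If the values are genuinely numeric, change the column to a numeric type
ALTER TABLE dbo.ImportedData ALTER COLUMN RefValue int;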

Access 2013 form field value gets cut off on changing the number before the point

Recently I created a form which loads some records from a linked SQL database.
I want to display some field values (which are decimal numbers, decimal(30,2) in SQL Server).
The values are loaded into the form and displayed with a comma for the decimals and a point as a thousands separator, like this: 5.222,55 (language settings on the computer).
The thing is, when I change the 5 before the point to any other number, the value gets truncated because the point is taken as the decimal separator. For example, if I select only the digit 5 in 5.222,55 (leaving the point) and change it to a 2, the value becomes 2,22.
When I select the whole number, or the first digit and the point, it changes correctly. So how can I get this right? The easy way is to just select the whole number when changing it, but I want it to work in every way. Perhaps I can achieve it with VBA? I tried setting the Format option (back in Access 2000 I believe I could set the text field to Long Integer or Currency or something, but I can't find this in the Access 2013 field properties).
Additional information:
I am linking to SQL Server 2012.
The linked table in Access sees the fields' record source (the SQL fields) as Short Text (while they are decimals in SQL Server).
Access cannot handle a decimal(30,2), so it is converted to text by the ODBC driver.
So either convert back and forth between text and numerics with Str and Val (the C* functions won't do), or change the data type of the field in SQL Server to, say, money (= Currency in Access).
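A minimal sketch of the second option, assuming a hypothetical table and column (note that money keeps 4 decimal places but has a much smaller range than decimal(30,2), so check that your values fit):

ALTER TABLE dbo.Amounts ALTER COLUMN AmountValue money;
-- Refresh or relink the table in Access afterwards so it picks up the new type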

Changing the length of Text fields in an Access linked table

I am exporting a file from a system as .csv. My aim is to link to this file as a table (which matches the output field for field) and then run the queries and export.
The problem I am having is that, upon import, all the fields are 255 bytes wide rather than what they need to be.
Here's what I've tried so far:
I've looked at ALTER TABLE but I cannot run multiple ALTER TABLE statements in one macro.
I've also tried appending the table into another table with the correct structure but it seems to overwrite the structure.
I've also tried using the Left function with the appropriate field length, but when I try to export, I pretty much just see 5 bytes per column.
What I would like is a suggestion as to what is the best path to take given my situation. I am not able to amend the initial .csv export, and I would like to avoid VBA if possible, as I am not at all familiar with it.
You don't really need to worry about the size of Text fields in an Access linked table that is connected to a CSV file. Access simply assigns each Text field the largest possible maximum size: 255. It does not mean that every value is actually 255 characters long, it just means that any values in those fields can be at most 255 characters long.
Even if you could change the structure of the linked table (which you can't), it wouldn't make any real difference except to possibly truncate longer Text values, and you could easily do that with a String function. For example, if a particular field had to be restricted to 15 characters then you could simply use Left([fieldName], 15) as a query column or as the control source in a report.
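For illustration (the field and table names are hypothetical), such a query column might look like:

SELECT Left([Description], 15) AS ShortDescription
FROM LinkedCsvTable;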
In the end, as the data set is not that large, I have set this up to append from my source data into a table with the correct structure. I can now run my processes against this table as per normal.