Text datatype in SQL got trimmed to 8000 characters

In SQL I have a table with around 2000 rows and I store XMLs in a column. The data type of the column is text. Around 80 rows are currently trimmed to exactly 8,000 characters, yet there are values in that column that have more than 8,000 characters. What could be the issue?

8,000 characters is actually the maximum number of characters that SSMS will display (by default); it will return the full amount to your front end - at least in the case of varchar(max). I think that is the case with text as well, but I don't use the text type anymore :P
It should be noted that you shouldn't be using the text data type in SQL Server, as it is deprecated.
See https://msdn.microsoft.com/en-us/library/ms143729.aspx for information on the text type's deprecation.
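If you want to confirm that the full XML really is stored and only the display is cut off, a quick check against the base table works. This is only a sketch: the table and column names (dbo.Documents, XmlData, ID) are placeholders for whatever the asker's schema uses, and the cast is needed because LEN does not accept the text type directly while DATALENGTH does:
SELECT ID,
       DATALENGTH(XmlData) AS bytes_stored,                -- DATALENGTH works on text/ntext
       LEN(CAST(XmlData AS varchar(max))) AS chars_stored  -- LEN needs a (n)varchar cast
FROM dbo.Documents
WHERE DATALENGTH(XmlData) > 8000;
If bytes_stored is larger than 8,000 for the "trimmed" rows, the data is intact and only the display is truncated.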

Related

SQL Server NText field limited to 43,679 characters?

I am working with a SQL Server database in order to store a very long Unicode string. The field is of type 'ntext', which theoretically should be limited to 2^30 Unicode characters.
From MSDN documentation:
ntext
Variable-length Unicode data with a maximum string length of 2^30 - 1 (1,073,741,823) bytes. Storage size, in bytes, is two times the string length that is entered. The ISO synonym for ntext is national text.
I made this test:
1. Generate a 50,000-character string.
2. Run an UPDATE SQL statement:
UPDATE [table]
SET Response='... 50,000 character string...'
WHERE ID='593BCBC0-EC1E-4850-93B0-3A9A9EB83123'
3. Check the result - what is actually stored in the field at the end.
The result was that the field [Response] contained only 43,679 characters. All the characters at the end of the string were thrown out.
Why does this happen? How can I fix it?
If this really is the capacity limit of this data type (ntext), what other data type can store a longer Unicode string?
Based on what I've seen, you may just only be able to copy 43,679 characters. It is storing all the characters; they're in the db (check this with Select Len(Response) From [table] Where ... to verify), and SSMS has a problem copying more than that when you go to look at the full data.
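As a concrete version of that check - the column, table, and key come from the question, and the cast is only there because LEN does not accept ntext directly:
SELECT LEN(CAST(Response AS nvarchar(max))) AS chars_stored,
       DATALENGTH(Response) AS bytes_stored   -- for ntext this should be chars_stored * 2
FROM [table]
WHERE ID = '593BCBC0-EC1E-4850-93B0-3A9A9EB83123'
If chars_stored comes back as 50,000, nothing was lost on the way into the database.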
The NTEXT datatype is deprecated and you should use NVARCHAR(MAX).
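A minimal sketch of that conversion, assuming the table/column names from the question and that changing the column in place is acceptable:
ALTER TABLE [table]
ALTER COLUMN Response nvarchar(max) NULL
-- often suggested afterwards to rewrite the old LOB pages into the new format:
-- UPDATE [table] SET Response = Response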
I see two possible explanations:
1. The ODBC driver you use to connect to the database truncates the parameter value when it is too long (try using SSMS).
2. You write that you generate your input string; I suspect you generate a CHAR(0), which is the null character.
If the second is your case, make sure you cannot generate a \0 character.
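A sketch for testing that second hypothesis: scan the stored value for an embedded null character. The names come from the question; the cast is needed because ntext cannot be passed to CHARINDEX, and the binary collation is there because some Windows collations ignore NCHAR(0) in comparisons:
SELECT CHARINDEX(NCHAR(0) COLLATE Latin1_General_BIN2,
                 CAST(Response AS nvarchar(max)) COLLATE Latin1_General_BIN2) AS first_null_pos  -- 0 = no \0 found
FROM [table]
WHERE ID = '593BCBC0-EC1E-4850-93B0-3A9A9EB83123'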
EDIT:
I don't know how you check the length, but keep in mind that LEN does not count trailing whitespace:
SELECT LEN('aa     ') AS length -- 2
      ,DATALENGTH('aa     ') AS datalength -- 7
The last possible explanation I see is that you do something like:
SELECT 'aa      aaaa'
-- the displayed/copied result may look like `aa aaaa`: when you count it that way you lose all the multiple whitespaces
Check whether the query below returns 100k (the total bytes):
SELECT DATALENGTH(ntext_column) FROM [table]
To see the full value, right-click the grid result and choose to save the results to a file.
Can confirm. The actual limit is 43,679. I have had a problem with a subscription service for a week now. All the data looked good, but it still gave us an error that one of the fields had invalid values, even though correct values were going in. It turned out that the parameters were stored in NText and it maxed out at 43,679 characters. And because we cannot change the database design, we had to make two different subscriptions for the same thing and put half of the entities into the other one.

Determine field's bytes in Redshift

I am moving a table from SQL Server to Redshift. I've exported the data and gotten it into a UTF-8 text file. When trying to load to Redshift, the COPY command fails, complaining the data exceeds the width of the field.
The destination Redshift table schema matches that of the source SQL Server table (i.e. varchar field widths are the same). If I understand correctly, Redshift's varchar size is in bytes, not in characters as in SQL Server. So multi-byte characters are causing the "too wide" problem.
I'd like to run a query to determine how big to make my varchar fields, but there doesn't seem to be a function that returns the number of bytes a string requires, only the number of characters in that string.
How have others solved this problem?
Field lengths, and as a consequence field types, can be critical in Redshift. Load sample data into a Redshift table with maximum field sizes; the sample should be as big as possible. Then you will be able to calculate the real field sizes, without relying on the definitions in MSSQL Server, which may be much bigger than you really need.
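Building on that idea, a sketch of the measurement on the Redshift side, assuming a generously sized staging table has already been loaded with the sample (the table and column names here are made up): OCTET_LENGTH returns bytes while LEN returns characters, so the gap between the two shows how much extra room multi-byte data needs.
SELECT MAX(OCTET_LENGTH(comment_col)) AS max_bytes,   -- what the Redshift varchar(n) has to cover
       MAX(LEN(comment_col))          AS max_chars    -- what the SQL Server definition was based on
FROM staging_table;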

SQL large comment field

I have been asked to code an SQL table that has a comment field that:
must be able to store at least several pages of text
What data type should I use, and what value equates to several pages of text?
In MS SQL, varchar(max) stores a maximum of 2,147,483,647 characters.
Depending on the database you are using, there is Text or CLOB (character large object), which you should use for the column with the several pages of text.
The reason for this is that the content of these columns is stored somewhere else in the database system (out-of-line) and doesn't decrease performance of other operations on the table (like, e.g., aggregation of some statistical data).
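As a small illustration, one way this might look in SQL Server - the table and column names are invented for the example, and on Oracle or DB2 the comment column would be a CLOB/NCLOB instead:
CREATE TABLE dbo.Tickets
(
    TicketID int IDENTITY(1,1) PRIMARY KEY,
    Comment  nvarchar(max) NULL   -- large values are pushed out of the row, so scans of the other columns stay cheap
);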

text or varchar?

I have 2 columns containing text; one will be at most 150 characters long and the other at most 700 characters long.
My question is: should I use varchar types for both, or should I use text for the 700-character column? Why?
Thanks.
The varchar data type in MySQL < 5.0.3 cannot hold data longer than 255 characters, while in MySQL >= 5.0.3 it has a maximum of 65,535 characters.
So it depends on the platform you're targeting and your deployment requirements. If you want to be sure that it will work on MySQL versions earlier than 5.0.3, go with a text type field for your longer column.
http://dev.mysql.com/doc/refman/5.0/en/char.html
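To make that concrete, a sketch assuming MySQL >= 5.0.3 (so VARCHAR can exceed 255 characters); the table and column names are invented for the example:
CREATE TABLE profiles (
    id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(150) NOT NULL,
    body  VARCHAR(700) NOT NULL   -- switch this to TEXT if MySQL < 5.0.3 must be supported
) ENGINE=InnoDB DEFAULT CHARSET=utf8;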
An important consideration is that with varchar the database stores the data directly in the table, and with text the database stores a pointer to a separate tablespace in which the data is stored. So, unless you run into the limit of a row length (64K in MySQL, 4-32K in DB2, 8K in SQL Server 2000) I would normally use varchar.
Not sure about mysql specifically, but in MS SQL you definitely should use a VARCHAR for anything under 8000 characters long, if you want to be able to run any sort of comparison on the value in the field. For example this would be possible with a VARCHAR:
select your_column from your_table
where your_column like '%dogs%'
but not with a TEXT field.
More information regarding the TEXT field in MySQL 5.4 can be found here, and more information about the VARCHAR field can be found here.
I'd pick the option based on the type of data being entered. For example, if it's a (potentially) super long username and a biography, I'd do varchar and text. Sure, you're limiting the bio to 700 chars, but what if you add HTML formatting down the road, keeping the 700-char limit but allowing HTML tags for formatting?
Twitter may use text for their tweets, since it could be quicker to add metadata to the posts (e.g. URL hrefs and #user PKs) and cache the additional data instead of calculating it on every render.

Creating a new text column in a SQL Server Table: which type should I choose?

I have a text field that can be 30 characters in length; however, it can reach around 2,000 characters.
For "future safety", it would be nice if this new column could support up to 3,000 characters. I think the average size of this field is 250 characters.
Which SQL Server 2000 data type is best for performance and disk space?
Do you need Unicode or non-Unicode data? Use either varchar(3000) or nvarchar(3000). Don't use text or ntext, as they become problematic to query or update.
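A minimal sketch of that in SQL Server 2000 syntax, assuming a hypothetical table called Articles and that Unicode is needed (use varchar(3000) otherwise):
ALTER TABLE Articles
ADD Notes nvarchar(3000) NULL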
Designing for the average length makes no sense.
If your field is going to hold up to 3,000 characters, you should probably make sure your datatype allows for something that big. Design based upon your maximum, not your average.
This sounds like the perfect case for VarChar. Are you doing any indexing on this field?
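One hedged note on the indexing question: SQL Server 2000 limits index keys to 900 bytes, so a varchar(3000)/nvarchar(3000) column cannot safely be a plain index key. If the field must be searchable by its start, indexing a short computed prefix is one possible workaround; all names below are hypothetical and reuse the Articles sketch above:
ALTER TABLE Articles
ADD NotesPrefix AS CAST(LEFT(Notes, 100) AS nvarchar(100))  -- deterministic, so it can be indexed

CREATE INDEX IX_Articles_NotesPrefix ON Articles (NotesPrefix)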