Postgres max field length issue - postgres-9.4

We are trying to load some data into Postgres, but the data is so large that it exceeds the character varying(10485760) limit.
Is there any possible way to increase this limit?
Thanks,

Use the TEXT data type. It has no declared length limit (values are still capped by PostgreSQL's roughly 1 GB per-field limit), so it will hold long data.
Use an ALTER TABLE command to change the column's data type.
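A minimal sketch of that conversion, assuming a table named events with an oversized column payload (both names are placeholders):
ALTER TABLE events ALTER COLUMN payload TYPE TEXT;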

Related

SQL Server data length [duplicate]

What is the best way to store a large amount of text in a table in SQL server?
Is varchar(max) reliable?
In SQL 2005 and higher, VARCHAR(MAX) is indeed the preferred method. The TEXT type is still available, but primarily for backward compatibility with SQL 2000 and lower.
I like using VARCHAR(MAX) (or actually NVARCHAR(MAX)) because it works like a standard VARCHAR field. Since its introduction, I have used it rather than TEXT fields whenever possible.
Varchar(max) is available only in SQL 2005 or later. This will store up to 2GB and can be treated as a regular varchar. Before SQL 2005, use the "text" type.
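A minimal sketch of declaring such a column (the table and column names are assumptions):
CREATE TABLE document (
    id INT IDENTITY PRIMARY KEY,
    body NVARCHAR(MAX)  -- holds up to 2 GB of data, yet behaves like a regular (N)VARCHAR
);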
According to the text found here, varbinary(max) is the way to go. You'll be able to store approximately 2GB of data.
Split the text into chunks that your database can actually handle, and put the split-up text in another table. Use the id from the text_chunk table as text_chunk_id in your original table. You might also want a column in your original table to keep text that fits within your largest text data type.
CREATE TABLE text_chunk (
    id INT,
    chunk_sequence INT,
    chunk_text VARCHAR(8000)  -- the largest plain varchar SQL Server allows before 2005
)
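To read a value back, a sketch of reassembling the chunks in order (names follow the table above; @text_chunk_id is a placeholder variable):
SELECT chunk_text
FROM text_chunk
WHERE id = @text_chunk_id
ORDER BY chunk_sequence;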
In a BLOB
BLOBs are very large variable-length binary or character data, typically documents (.txt, .doc) and pictures (.jpeg, .gif, .bmp), which can be stored in a database. In SQL Server, BLOBs can be of the text, ntext, or image data type; here you can use the text type:
text
Variable-length non-Unicode data, stored in the code page of the server, with a maximum length of 2^31 - 1 (2,147,483,647) characters.
Depending on your situation, a design alternative to consider is saving the text as .txt files on the server and storing the file paths in your database.
Use nvarchar(max) to store the whole chat conversation thread in a single record. Each individual text message (or block) is identified in the content text by inserting markers.
Example:
{{UserId: Date and time}}<Chat Text>.
At display time, the UI should be intelligent enough to understand these markers and display them correctly. This way one record should suffice for a single conversation, as long as the size limit is not reached.

How to insert data larger than maximum capacity of specific column without changing its datatype in Oracle 11g

I have large data (more than 4000 characters) and I have a column of type VARCHAR2(4000) in Oracle 11g.
Is there any way to insert that data in this column without changing its data type?
If you are referring to a variable defined in a PL/SQL package, function, or procedure, then the maximum length of a VARCHAR2 variable is 32k. If the value must be persisted then you have to decide if you want to keep the data contiguous. If you do, then you must change the column's datatype to CLOB. If it does not need to be contiguous, then simply create a child relation to store the pieces.
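A sketch of the CLOB route described above (Oracle cannot ALTER a VARCHAR2 column to CLOB in place, so the column is swapped out; the table and column names are assumptions):
ALTER TABLE big_data ADD (body_clob CLOB);
UPDATE big_data SET body_clob = body;
ALTER TABLE big_data DROP COLUMN body;
ALTER TABLE big_data RENAME COLUMN body_clob TO body;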
No. Oracle does not silently truncate; an INSERT longer than the declared size fails with an error (ORA-12899). There is no VARCHAR2(MAX) in Oracle, and in 11g a table VARCHAR2 column is capped at 4000 bytes, so the practical fix is to change the column to a CLOB. (From 12c onwards, VARCHAR2 columns can be extended to 32767 bytes with MAX_STRING_SIZE = EXTENDED.)

Determine field's bytes in Redshift

I am moving a table from SQL Server to Redshift. I've exported the data and gotten it into a UTF-8 text file. When trying to load to Redshift, the COPY command fails, complaining the data exceeds the width of the field.
The destination Redshift table schema matches that of the source SQL Server table (i.e. varchar field widths are the same). If I understand correctly, Redshift's varchar size is in bytes, not characters, like SQL Server. So, multi-byte characters are causing the "too wide" problem.
I'd like to run a query to determine how big to make my varchar fields, but there doesn't seem to be a function that returns the number of bytes a string requires, only the number of characters in that string.
How have others solved this problem?
Field lengths, and consequently field types, can be critical in Redshift. Load sample data into the Redshift table with maximum field sizes, and make the sample as large as possible. Then you will be able to calculate the real field sizes, regardless of the definitions in MSSQL Server, which may be much bigger than you really need.
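Once a sample is loaded, something like the following sketch reports byte width versus character width, since Redshift's OCTET_LENGTH returns bytes while LEN returns characters (staging_table and my_column are placeholders):
SELECT MAX(OCTET_LENGTH(my_column)) AS max_bytes,
       MAX(LEN(my_column)) AS max_chars
FROM staging_table;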

Changing column datatype from DECIMAL(9,0) to DECIMAL(15,0)

Can you please help me with this (I didn't find it in the Teradata documentation, which is honestly a little overwhelming): my table had this column - BAN DECIMAL(9,0) - and now I want to change it to - BAN DECIMAL(15,0) COMPRESS 0 - How can I do it? What does COMPRESS 0, or any other COMPRESS clause, mean anyway?
I hope this is possible and I don't have to create a new table and then copy the data from the old table. The table is very, very big - when I do a COUNT(*) on that table I get this error: 2616 numeric overflow occurred during computation
The syntax diagram for ALTER TABLE doesn't seem to support directly changing a column's data type. (Teradata SQL DDL Documentation). COMPRESS 0 compresses zeroes. Teradata supports a lot of different kinds of compression.
Numeric overflow here probably means you've exceeded the range of an integer. To make that part work, just try casting to a bigger data type. (You don't need to change the column's data type to do this.)
select cast(count(*) as bigint)
from table_name;
You asked three different questions:
You cannot change the data type of a column from DECIMAL(9,0) to DECIMAL(15,0). Your best bet is to create a new column (NEW_BAN), assign the values from your old column, drop the old column, and rename NEW_BAN back to BAN (see the sketch after this answer).
COMPRESS 0 is not a constraint. It means that values of "zero" are compressed from the table, saving disk space.
Your COUNT(*) is returning that error because the table has more than 2,147,483,647 rows (the maximum value of an INTEGER). Cast the result as BIGINT (as shown by Catcall).
And I agree, the documentation can be overwhelming. But be patient and focus only on the SQL titles for your exact release. They really are well written.
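A sketch of the column-swap approach from the first point above, assuming Teradata's ALTER TABLE ... RENAME syntax (the table name is a placeholder):
ALTER TABLE my_table ADD new_ban DECIMAL(15,0);
UPDATE my_table SET new_ban = ban;
ALTER TABLE my_table DROP ban;
ALTER TABLE my_table RENAME new_ban TO ban;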
You cannot use ALTER TABLE to change the data type from DECIMAL(9,0) to DECIMAL(15,0) because it crosses the byte boundary required to store the values in the table. For Teradata 13.10, see the Teradata manual SQL Data Definition Language Detailed Topics, pages 61-65, for more details on using ALTER TABLE to change column data types.

text or varchar?

I have 2 columns containing text: one will be at most 150 characters long, the other at most 700.
My question is: should I use varchar for both, or should I use text for the 700-character column? Why?
Thanks,
The varchar data type in MySQL < 5.0.3 cannot hold data longer than 255 characters, while in MySQL >= 5.0.3 it has a maximum of 65,535 characters.
So it depends on the platform you're targeting and your deployment requirements. If you want to be sure it will work on MySQL versions older than 5.0.3, go with a text column for your longer field.
http://dev.mysql.com/doc/refman/5.0/en/char.html
An important consideration is that with varchar the database stores the data directly in the table, and with text the database stores a pointer to a separate tablespace in which the data is stored. So, unless you run into the limit of a row length (64K in MySQL, 4-32K in DB2, 8K in SQL Server 2000) I would normally use varchar.
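Given the stated limits (150 and 700 characters), on MySQL >= 5.0.3 both columns fit comfortably in VARCHAR; a minimal sketch (the table and column names are assumptions):
CREATE TABLE article (
    id INT PRIMARY KEY,
    title VARCHAR(150),   -- the shorter column
    summary VARCHAR(700)  -- the longer column, still well under the varchar limit
);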
Not sure about MySQL specifically, but in MS SQL you definitely should use a VARCHAR for anything under 8,000 characters long if you want to be able to run comparisons on the value in the field. For example, this would be possible with a VARCHAR:
select your_column from your_table
where your_column like '%dogs%'
but not with a TEXT field.
More information regarding the TEXT field in MySQL 5.4 can be found here, and more information about the VARCHAR field can be found here.
I'd pick the option based on the type of data being entered. For example, if it's a (potentially) super long username and a biography, I'd do varchar and text. Sure, you're limiting the bio to 700 chars, but what if you add HTML formatting down the road, keeping the 700 char limit but allowing HTML tags for formatting?
Twitter may use text for their tweets, since it could be quicker to add metadata to the posts (e.g. URL hrefs and #user PKs) and cache the additional data instead of calculating it on every render.