Teradata column size error exceeding the 64k byte limit

We have a requirement to increase 3 existing columns from VARCHAR(4096) to VARCHAR(11000). We pull the data from an Oracle table using Informatica
and load it into a Teradata table.
In my testing, the Informatica job was successful as long as all 3 columns were defined as VARCHAR(10001) in both Informatica and Teradata (the table has other columns as well).
But if I increase the size beyond 10001, I get this error:
[ERROR] Type:(Teradata DBS Error), Error: (The available data bytes in the table's perm row has exceeded the 64k byte limit.)
I tried using a CLOB data type for one column in the target table, and even that failed.
I found a Stack Overflow post that talks about splitting the table because of the size limitation in Teradata 16:
Row size limitation in Teradata
Could you please let me know if there is any way to do this without splitting the table?
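
For reference, a minimal sketch of the kind of DDL change being tested, with hypothetical database/table/column names (in Teradata, widening a VARCHAR is typically done with ALTER TABLE ... ADD on the existing column name; verify the syntax for your release):

ALTER TABLE my_db.my_table ADD col1 VARCHAR(11000);
ALTER TABLE my_db.my_table ADD col2 VARCHAR(11000);
ALTER TABLE my_db.my_table ADD col3 VARCHAR(11000);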

Related

Connection timeout error while reading a table with more than 100 columns in Mosaic Decisions

I am reading a table via a Snowflake reader node. When the table has a smaller number of columns/attributes (around 50-80), it is read fine on the Mosaic Decisions canvas. But when the number of attributes increases (approx. 385 columns), the Mosaic reader node fails. As a workaround I tried adding a WHERE clause with 1=2; in that case it pulls the structure of the table. But when I try to read the records, even with a limit (only 10 records) applied to the query, it throws a connection timeout error.
I faced a similar issue while reading a table with approx. 300 columns and managed it with the input parameters available in Mosaic. In your case, you will have to change the copy field variable used in the query to 1=1 at run time.
The following steps can be used to achieve this (see the sketch after these steps):
Create a parameter (e.g. copy_variable) with a default value of 2 for the copy field variable.
In the reader node, write the SQL with 1 = $(copy_variable). While validating, this is the same as the 1=2 condition, so the query should validate fine.
Once it is validated and the schema is generated, update the default value of $(copy_variable) to 1 so that at run time you still get all the records.
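
A minimal sketch of what the reader-node query might look like with the parameter in place (the table name and the 10-record limit are assumptions carried over from the question):

SELECT *
FROM my_schema.wide_table
WHERE 1 = $(copy_variable)   -- resolves to 1 = 2 during validation, 1 = 1 at run time
LIMIT 10;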

DB2/400 SQL: the "ALLOCATE" clause

A DB2/400 SQL question: in the statement below, when a column is created with the "ALLOCATE" clause, does this mean that the database engine creates the column with an initial size of 20 MB? I ask because the analysis of the system table shows the column with a size of 2G, 7M.
Does the size indicated in the "LENGTH" column correspond to the allocated size, or to the maximum size of the column?
Db2 for IBM i stores table data in two parts: the fixed-length row-buffer table space and an overflow space.
If a table has only fixed-length columns, all the data is in the table space.
ALLOCATE(x) means that the DB allocates x bytes in the table space for the column, and the rest is stored in the overflow space.
The default for varying types is ALLOCATE(0), so in theory the entire varying value is stored in the overflow space.
In reality, VARCHAR(30) or smaller is stored in the fixed-length table space for performance, unless you explicitly specify ALLOCATE(0).
The reason it matters: if a query accesses both the fixed-length table space and the overflow space, then 2 I/Os are required to retrieve all the data.
IBM recommends using an ALLOCATE(x) where x is large enough to hold at least 80% of the values you have.
As you can see for yourself, LENGTH is the maximum size of the column.
IBM ACS's schema tool, for one example, shows you the allocated length:
create table mytable (
  mykey integer
, myclob clob(200000) allocate(2000)   -- max length 200,000 bytes; the first 2,000 bytes are kept in the fixed-length table space
);
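
If you prefer to check this from the catalog rather than ACS, a hedged sketch of a query against the system view (the view and column names are assumptions to verify against your Db2 for i release):

SELECT column_name, data_type, length
FROM qsys2.syscolumns
WHERE table_name = 'MYTABLE';

Here LENGTH reports the maximum size of the column, not the allocated portion.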

Change data type in table due to Disk Space/Memory Error

Attempts at changing a data type in Access have failed due to the error:
"There isn't enough disk space or memory". Over 385,325 records exist in the table.
The approaches in the following links, among other Stack Overflow threads, have also failed:
Can't change data type on MS Access 2007
Microsoft Access can't change the datatype. There isn't enough disk space or memory
The intention is to change the data type of one column from "Text" to "Number". The aforementioned links cannot accommodate that, either due to the table size or the desired data types.
Breaking out the table may not be an option due to the number of records.
Help on this would be appreciated.
I cannot say for sure about MS Access, but in MS SQL one can avoid a table rebuild (which requires a lot of time and space) by appending a new column that allows NULL values at the rightmost end of the table, updating that column with normal UPDATE queries and, AFAIK, even dropping the old column and renaming the new one. So in the end it is just the position of that column that has changed.
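
A minimal T-SQL sketch of that sequence, using hypothetical table and column names (MyTable, OldCol, NewCol); TRY_CONVERT assumes SQL Server 2012 or later:

ALTER TABLE MyTable ADD NewCol INT NULL;               -- new nullable column appended at the end
UPDATE MyTable SET NewCol = TRY_CONVERT(INT, OldCol);  -- populate it from the old text column
ALTER TABLE MyTable DROP COLUMN OldCol;                -- drop the old text column
EXEC sp_rename 'MyTable.NewCol', 'OldCol', 'COLUMN';   -- keep the original column name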
As for your 385,325 records (I'd expect that number to be correct): even if the table had 1000 columns with 500 Unicode characters each, we'd end up with approximately 385,325 * 1000 * 500 * 2 ≈ 385 GB of data. That should not exceed what is available nowadays, so:
If it's the disk space you're running out of, how about moving the data to some other computer, changing the DB there, and moving it back?
If the DB seems to be corrupted (and the standard tools didn't help; make a copy first), it will most probably help to create a new table or database using table-creation queries (better: create it manually and append the data).

Huge join query leads to max row size error

I'm executing a SQL query that joins 100+ tables together and I am running into the following error message:
Cannot create a row of size 8131 which is greater than the allowable
maximum row size of 8060.
I'd just like to know what my options are at this point. Is this query impossible to execute? Are there any workarounds?
Appreciate any help
Thanks
Your problem is not the join, or the number of tables. It is the number and size of the fields in the SELECT. You are reaching a row size limit, not a row count limit.
Make sure you are not using any "*" in your SELECT, then eliminate any unused fields and trim/limit strings where possible.
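
As a hedged illustration of trimming the SELECT list (the table and column names are made up):

SELECT o.order_id,
       c.customer_name,
       LEFT(o.long_note, 200) AS long_note   -- cap wide strings you only need part of
FROM dbo.Orders o
JOIN dbo.Customers c ON c.customer_id = o.customer_id;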
From MSDN forum:
You're hitting SQL's row size limit, which is 8060 bytes (i.e. an 8K page). Using normal data types you cannot have a row which uses more than 8060 bytes, and while you can use a varchar to allow the smaller bits of data to offset the larger ones, with 468 columns of data you're looking at an average column width of 17.2 bytes.
If you convert varchar(x) to varchar(max), the issue will be resolved.
Please also refer to: How SQL Server stores rows whose size is greater than 8060 bytes, and Difference between varchar(max) and varchar(8000).
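
A hedged sketch of the conversion suggested above (the table and column names are placeholders):

ALTER TABLE dbo.WideStaging ALTER COLUMN long_description VARCHAR(MAX) NULL;   -- lets the value be stored off-row when the row would not fit in an 8 KB page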

Is there a way around the 8k row length limit in SQL Server?

First off, I know that in general having large numbers of wide columns is a bad idea, but this is the format I'm constrained to.
I have an application that imports CSV files into a staging table before manipulating them and inserting/updating values in the database. The staging table is created on the fly and has a variable number of NVARCHAR columns into which the file is imported, plus two INT columns used as row IDs.
One particular file I have to import is about 450 columns wide. With the 24-byte pointer used in a large NVARCHAR column, this adds up to around 10k by my calculations, and I get the error Cannot create a row of size 11166 which is greater than the allowable maximum row size of 8060.
Is there a way around this or are my only choices modifying the importer to split the import or removing columns from the file?
You can use text/ntext, which uses a 16-byte pointer, whereas varchar/nvarchar uses a 24-byte pointer.
NVARCHAR(MAX) or NTEXT can store more than 8 KB of data, but a record cannot be larger than 8 KB up to SQL Server 2012. If the data does not fit in the 8 KB page, the data of the larger column is moved to another page, and a 24-byte pointer (for varchar/nvarchar) is stored in the main row as a reference; for text/ntext the pointer is 16 bytes.
For details you can visit the following links:
Work around SQL Server maximum columns limit 1024 and 8kb record size
or
http://msdn.microsoft.com/en-us/library/ms186939(v=sql.90).aspx
If you are using SQL Server 2005, 2008 or 2012, you should be able to use NVARCHAR(max) or NTEXT, which can hold more than 8,000 characters. MAX gives you up to 2^31 - 1 bytes of storage:
http://msdn.microsoft.com/en-us/library/ms186939(v=sql.90).aspx
I agree that varchar(max) or nvarchar(max) is a good solution and will probably work for you, but for completeness I will add that you can also split the data across more than one table, with the tables having a one-to-one relationship.
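
A hedged sketch of that one-to-one split, with made-up table and column names and the two INT row-ID columns carried over from the question:

CREATE TABLE dbo.StagingMain (
    RowId     INT NOT NULL PRIMARY KEY,
    FileRowId INT NOT NULL,
    Col001    NVARCHAR(4000),
    Col002    NVARCHAR(4000)
    -- ... roughly the first half of the CSV columns ...
);

CREATE TABLE dbo.StagingOverflow (
    RowId  INT NOT NULL PRIMARY KEY
        REFERENCES dbo.StagingMain (RowId),   -- the shared key enforces the one-to-one relationship
    Col226 NVARCHAR(4000),
    Col227 NVARCHAR(4000)
    -- ... the remaining CSV columns ...
);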