Huge join query leads to max row size error - sql

I'm executing a SQL query that joins 100+ tables together and I am running into the following error message:
Cannot create a row of size 8131 which is greater than the allowable
maximum row size of 8060.
I'd just like to know what my options are at this point. Is this query impossible to execute? Are there any workarounds?
Appreciate any help
Thanks

Your problem is not the join, or the number of tables. It is the number and size of the fields in the SELECT. You are reaching a row size limit, not a row count limit.
Make sure you are not using any "*" in your SELECT, then eliminate any unused fields and trim/limit strings where possible.
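For example, a minimal sketch with hypothetical table and column names, naming only the columns you need and capping a wide text column instead of selecting everything from every joined table:

SELECT o.OrderID,
       c.CustomerName,
       LEFT(n.Notes, 200) AS NotesPreview   -- trim a wide text column down to what is actually used
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
JOIN dbo.OrderNotes n ON n.OrderID = o.OrderID;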

From MSDN forum:
You're hitting SQL Server's row size limit, which is 8060 bytes (i.e. an 8K page). Using normal data types you cannot have a row that uses more than 8060 bytes, and while you can use varchar to allow the smaller bits of data to offset the larger ones, with 468 columns of data you're looking at an average column width of 17.2 bytes.
If you convert the varchar(x) columns to varchar(max), the issue will be resolved.
Please also refer to: How SQL Server stores a row when its size is greater than 8060 bytes, and Difference Between varchar(max) and varchar(8000).
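If you go the varchar(max) route, the change is a column alteration along these lines (a sketch with a hypothetical table and column name; values too large to fit in-row are then stored off-row):

ALTER TABLE dbo.WideTable
    ALTER COLUMN LongText varchar(max);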

Related

Pghero maximum query length in query stats

A couple of our longest running queries exceed 10,000 characters, but cannot be analysed through the pghero interface because they have been truncated to around 9980 characters.
I had a look through the code but couldn't see any limitations being imposed.
Is this limit derived from the postgresql track_activity_query_size value, perhaps?
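If it is that setting, you can check and raise it on the PostgreSQL side; a minimal sketch (the new value only takes effect after a server restart):

-- Show the current truncation limit for query text in pg_stat_activity / pg_stat_statements
SHOW track_activity_query_size;

-- Raise the limit (requires a restart to take effect)
ALTER SYSTEM SET track_activity_query_size = 16384;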

Teradata column size error exceeding 64k byte limit

We have a requirement to increase 3 existing columns from VARCHAR(4096) to VARCHAR(11000). We pull the data from an Oracle table using Informatica
and load it into a Teradata table.
As per my testing, the Informatica job was successful as long as all 3 columns were VARCHAR(10001) in Informatica and Teradata. (This table has other columns as well.)
But if I increase the size beyond 10001, I get the error:
ERROR] Type:(Teradata DBS Error), Error: (The available data bytes in the table's perm row has exceeded the 64k byte limit.)
I tried using the CLOB datatype in the target table for one column, but even that was failing.
I found a post on Stack Overflow which talks about splitting the table because of the size limitation in Teradata 16:
Row size limitation in Teradata
Could you please let me know if there is any option to do this without splitting the table?

Oracle: How to convert row count into data size

I would like to know how to convert a number of rows into a size, like MB or KB.
Is there a way to do that, or a formula?
The reason I'm doing this is that I would like to know, for a given set of data (not everything in the tablespace), how much space that set of data uses.
Thanks,
keith
If you want an estimate, you could multiply the row count by user_tables.avg_row_len for that table.
If you want the real size of the table on disk, it is available in user_segments.bytes. Note that the smallest unit Oracle will allocate is a block, so even for an empty table you will see a value bigger than zero in that column. That is the actual size of the space reserved in the tablespace for that table.
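As a sketch (hypothetical table name MY_TABLE; the estimate assumes reasonably fresh optimizer statistics):

-- Rough estimate: number of rows in the set multiplied by the average row length
SELECT avg_row_len * :row_count AS estimated_bytes
FROM   user_tables
WHERE  table_name = 'MY_TABLE';

-- Actual space allocated to the table segment (whole blocks, so never zero)
SELECT bytes
FROM   user_segments
WHERE  segment_name = 'MY_TABLE';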

Does it matter if I index a column > 900 bytes

So SQL Server places a limit of 900 bytes on an index key. I have columns that are NVARCHAR(1000) that I need to search on. Full text search of these columns is not required because search will always occur on the complete value or a prefix of the complete value. I will never need to search for terms that lie in the middle/end of the value.
The rows of the tables in question will never be updated predicated on this index, and the actual values that exceed 450 chars are outliers that will never be searched for.
Given the above, is there any reason not to ignore the warning:
The total size of an index or primary key cannot exceed 900 bytes
?
You shouldn't ignore the warning, as any subsequent INSERT or UPDATE statement that specifies data values generating a key value longer than 900 bytes will fail. The following link might help:
http://decipherinfosys.wordpress.com/2007/11/06/the-900-byte-index-limitation-in-sql-server/
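A minimal sketch of the failure mode (hypothetical table; the 900-byte key limit applies to index keys on older SQL Server versions, as in the warning above):

CREATE TABLE dbo.Demo (Id INT PRIMARY KEY, Val NVARCHAR(1000));
CREATE INDEX IX_Demo_Val ON dbo.Demo (Val);            -- creates with a warning only

INSERT INTO dbo.Demo VALUES (1, REPLICATE(N'a', 400)); -- 800-byte key: succeeds
INSERT INTO dbo.Demo VALUES (2, REPLICATE(N'a', 500)); -- 1000-byte key: fails at runtime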

Is there a way around the 8k row length limit in SQL Server?

First off, I know that in general having large numbers of wide columns is a bad idea, but this is the format I'm constrained to.
I have an application that imports CSV files into a staging table before manipulating them and inserting/updating values in the database. The staging table is created on the fly and has a variable number of NVARCHAR columns into which the file is imported, plus two INT columns used as row IDs.
One particular file I have to import is about 450 columns wide. With the 24 byte pointer used in a large NVARCHAR column, this adds up to around 10k by my calculations, and I get the error Cannot create a row of size 11166 which is greater than the allowable maximum row size of 8060.
Is there a way around this or are my only choices modifying the importer to split the import or removing columns from the file?
You can use text/ntext, which uses a 16-byte pointer, whereas varchar/nvarchar uses a 24-byte pointer.
NVARCHAR(max) or NTEXT can store more than 8 KB of data, but the in-row record size still cannot exceed 8 KB (through SQL Server 2012). If the data does not fit on the 8 KB page, the data of the larger column is moved to another page and a 24-byte pointer (for varchar/nvarchar data types) is stored in the main row as a reference; for text/ntext data types a 16-byte pointer is used.
For details, see the following links:
Work around SQL Server maximum columns limit 1024 and 8kb record size
or
http://msdn.microsoft.com/en-us/library/ms186939(v=sql.90).aspx
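As a sketch of that approach for the staging table (hypothetical column names), declaring the wide columns as NVARCHAR(MAX) lets values that do not fit in-row be pushed off-row, leaving only the pointer in the 8,060-byte record:

CREATE TABLE dbo.Staging (
    RowId   INT IDENTITY(1,1) PRIMARY KEY,
    BatchId INT NOT NULL,
    Col001  NVARCHAR(MAX) NULL,
    Col002  NVARCHAR(MAX) NULL
    /* ... remaining imported columns ... */
);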
If you are using SQL Server 2005, 2008 or 2012, you should be able to use NVARCHAR(max) or NTEXT, which can hold far more than 8,000 characters. MAX will give you up to 2^31 - 1 bytes of storage:
http://msdn.microsoft.com/en-us/library/ms186939(v=sql.90).aspx
I agree that varchar(max) or nvarchar(max) is a good solution and will probably work for you, but for completeness I will suggest that you can also split the columns across more than one table, with the two tables having a one-to-one relationship.
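A sketch of that vertical split (hypothetical names; the imported columns would be divided roughly in half between the two tables):

CREATE TABLE dbo.StagingPart1 (
    RowId  INT NOT NULL PRIMARY KEY,
    Col001 NVARCHAR(4000) NULL,
    Col002 NVARCHAR(4000) NULL
    /* ... first half of the columns ... */
);

CREATE TABLE dbo.StagingPart2 (
    RowId  INT NOT NULL PRIMARY KEY
           REFERENCES dbo.StagingPart1 (RowId),  -- enforces the one-to-one link
    Col226 NVARCHAR(4000) NULL,
    Col227 NVARCHAR(4000) NULL
    /* ... second half of the columns ... */
);

-- Reassemble the full row when needed
SELECT p1.*, p2.*
FROM dbo.StagingPart1 p1
JOIN dbo.StagingPart2 p2 ON p2.RowId = p1.RowId;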