SQL Server JSON truncated (even when using NVARCHAR(MAX))

DECLARE @result NVARCHAR(MAX);
SET @result = (SELECT * FROM table
FOR JSON AUTO, ROOT('Data'))
SELECT @result;
This returns a JSON string of ~43,000 characters, with some results truncated.
SELECT * FROM table
FOR JSON AUTO, ROOT('Data')
This returns a JSON string of ~2,000 characters. Is there any way to prevent truncation, even when dealing with big data where the string is millions of characters long?

I didn't find an 'official' answer, but it seems that this is an issue with the new 'FOR JSON' statement, which splits the result into rows 2,033 characters long.
As recommended here, the best option so far is to iterate through the result set, concatenating the returned rows:
// each row holds one ~2,033-character chunk of the JSON; concatenate them all
var result = new StringBuilder();
while (reader.Read())
{
    result.Append(Convert.ToString(reader[0]));
}
string json = result.ToString();
BTW, it seems that the latest versions of SSMS are already applying some kind of workaround like this to present the result in a single row.

I was able to get the full, non-truncated string by using PRINT instead of SELECT in SQL Server 2017 (version 14.0.2027):
DECLARE @result NVARCHAR(MAX);
SET @result = (SELECT * FROM table
FOR JSON AUTO, ROOT('Data'))
PRINT @result;
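Note that PRINT emits at most 4,000 Unicode characters per message, so for longer strings printing the value in slices is a common workaround (a minimal sketch, reusing @result from above):
DECLARE @i int = 1, @len int = LEN(@result);
WHILE @i <= @len
BEGIN
    PRINT SUBSTRING(@result, @i, 4000);  -- one message per 4,000-character slice
    SET @i += 4000;
END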
Another option would be to download and use Azure Data Studio, which I think is a multi-platform rewrite of SSMS (similar to how Visual Studio was re-imagined as VS Code). It seems to spit out the entire, non-truncated JSON string as expected, out of the box!

This will also work if you insert into a table variable or temp table: since the result is not presented directly, SSMS's display truncation does not apply.
Might be useful if you need to calculate several values.
DECLARE @json TABLE (j NVARCHAR(MAX));
INSERT INTO @json SELECT * FROM (SELECT * FROM Table WHERE Criteria1 FOR JSON AUTO) a(j);
INSERT INTO @json SELECT * FROM (SELECT * FROM Table WHERE Criteria2 FOR JSON AUTO) a(j);
SELECT * FROM @json;

I know this is an old thread, but I have had success with this issue by sending the result to an XML variable. The advantage of using an XML variable is that the size is not stated as a character length but by the size of the string in memory, which can be changed in the options. Therefore Brad C's response would now look like...
DECLARE @result XML
SET @result = (SELECT * FROM table
FOR JSON AUTO, ROOT('Data'))
SELECT @result
or...
PRINT @result;

Here is the answer to the JSON truncation:
SQL Server divides the JSON result into chunks about 2K in size (at least my SQL Server 2016 installation does), one chunk in the first column of each row of the result set. To get the entire result, your client code has to loop through the result set and concatenate the first column of each record. When you've reached the end of the rows, voila, your entire JSON result is retrieved, uncut.
When I first encountered the truncation problem I was baffled, and I wrote off FOR JSON for several years as an unserious feature suited only to the smallest of datasets. I learned that I needed to read the entire recordset only from the FOR XML documentation, and never actually saw it mentioned in the FOR JSON docs.

The easiest workaround to avoid the truncation is to wrap the query in another SELECT:
SELECT (
    <your query> FOR JSON PATH [or FOR JSON AUTO]
) AS json
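As a concrete illustration (dbo.Orders and its columns are hypothetical), the outer SELECT turns the FOR JSON stream into a single NVARCHAR(MAX) scalar value, so nothing is split into 2,033-character rows:
SELECT (
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders
    FOR JSON PATH, ROOT('Data')
) AS json;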

We've seen similar issues in SSMS: without using a variable, SSMS truncates at 2,033 characters.
With a variable the query actually works OK when you use an NVARCHAR(MAX) variable, but SSMS truncates the output in the query results view at 43,697 characters.
A possible solution I've tested is outputting the query results to a file using BCP:
bcp "DECLARE @result NVARCHAR(MAX); SET @result = (SELECT * FROM table FOR JSON AUTO, ROOT('Data')); SELECT @result AS Result;" queryout "D:\tmp\exportOutput.txt" -S SQL_SERVER_NAME -T -w
See the BCP docs for specifying the server name\instance and authentication options (-w writes Unicode output, which NVARCHAR data needs).

It's difficult to determine exactly what problem you're having without seeing the data, but I had a similar problem when I was attempting to export a query in JSON format. The solution that worked for me was to go to Query > Query Options > Results > Text and set "Maximum number of characters displayed in each column" to 8192 (the max value, AFAIK).
This probably won't help much with your first query, but that one could potentially be broken into smaller queries and executed successfully. I would anticipate that you could effectively run your second query after changing that setting.

If your data length is less than 65,535 characters, you should use the suggestion of @dfundako, who commented on the first post:
Try going to Tools, Options, Query Results, SQL Server, Results to Grid, and set Non-XML data to the max amount (I think 65535)
In my case the data length was 21k characters, so after exporting to grid I copied the value and it was fine, not truncated. Still, it doesn't solve the issue for those with larger amounts of data.

Try Visual Studio Code with the Microsoft SQL extension. I got 6,800 characters of JSON without truncation. It seems SSMS truncates results.

Related

Replace function SQL

I have a problem where the REPLACE function does not work:
DECLARE @Tabela nvarchar(25)
DECLARE @query nvarchar(max)
SET @Tabela = '_#tmp_tt2_POS_racuni_'
SET @query = 'SELECT * INTO '+@Tabela+((replace(convert(varchar(10), getdate(),121),'''-''',''''))+'-'+(replace(convert(nvarchar(10),getdate(),108),''':''','''')))+'NP'+' FROM _tabels'
PRINT @query
SELECT *
INTO _#tmp_tt2_POS_racuni_2021-12-21-11:15:27NP
FROM _tabels
Completion time: 2021-12-21T11:15:27.0724917+01:00
You should use FORMAT and specify the format you want directly instead of going through intermediate formats. For example:
select format(getdate(),'yyyyMMddHHmmss')
Produces 20211221124017. FORMAT is slower than CONVERT, but in this case it's only called once; it's far more important to write a readable query that produces the correct result.
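For comparison, a CONVERT-based sketch of the same timestamp using styles 112 (yyyymmdd) and 108 (hh:mi:ss), which sidesteps FORMAT's .NET overhead when that matters:
-- '20211221' + '124017' = '20211221124017'
SELECT CONVERT(char(8), GETDATE(), 112)
     + REPLACE(CONVERT(char(8), GETDATE(), 108), ':', '');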
That said, it's probably better to use table partitioning instead of creating lots of temporary tables with a date in the name. All supported SQL Server versions and editions support partitioning, even LocalDB.
The quotes you use are two too many.
You are using replace(date,''':''',''''), which replaces ':' (a colon wrapped in quotes) with '' (two single quotes). However, the output of getdate() doesn't contain any quotes itself. I guess you did that because of the dynamic SQL you are using, but for the dates you should omit the quotes:
replace(date,':','')
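Applied to the original expression, the unquoted version would be (a sketch):
-- e.g. '111527' for 11:15:27
REPLACE(CONVERT(nvarchar(10), GETDATE(), 108), ':', '')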
Firstly, let's get onto the real problem that is discussed at length in the comments; this is a terrible idea.
The fact you want to create a table for an exact point in time smells very strongly of an XY problem. What is the real problem you are trying to solve with this? Most likely what you really want is a partitioned table or a temporal table, so that you can query the data for an exact point in time. Which one you need, we don't know, but I would suggest that you rethink your "solution" here.
As for the problem, it's working exactly as intended. Let's look at your REPLACE in isolation:
replace(convert(varchar(10), getdate(),121),'''-''','''')
So, in the above, you are replacing '-' (a hyphen wrapped in single quotes) with '' (two single quotes). You are not replacing a hyphen (-) with a zero-length string; that would be REPLACE(..., '-','').
The style you are using, 121, gives the format yyyy-mm-dd hh:mi:ss.mmm, which doesn't contain a single quote ('), so no wonder it isn't finding the pattern.
Though you don't need REPLACE on that date part at all. You are taking the first 10 characters of the style and then removing the hyphens (-) to get yyyyMMdd, but there is already a style for that: style 112.
The above could be rewritten as:
DECLARE @Tabela sysname;
DECLARE @query nvarchar(max);
SET @Tabela = N'_#tmp_tt2_POS_racuni_';
SET @query = N'SELECT * INTO dbo.' + QUOTENAME(CONCAT(@Tabela, CONVERT(nvarchar(8), GETDATE(), 112), N'-', REPLACE(CONVERT(nvarchar(10), GETDATE(), 108), ':', ''), N'NP')) + N' FROM dbo._tabels;';
PRINT @query;
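Once the printed statement looks right, it can be executed with, for example:
EXEC sys.sp_executesql @query;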

Issue with data population from XML

I am reading data from XML into a table. When I do a select from the table, it is empty.
SET @INPUTXML = CAST(@Attribute AS XML)
EXEC sp_xml_preparedocument @TestDoc OUTPUT, @INPUTXML
SELECT Row_Number() OVER (ORDER BY Name) AS Row, *
INTO #tData
FROM OPENXML(@TestDoc, N'/DocumentElement/dtData')
WITH (
    ID VARCHAR(100) './ID'
    , Name VARCHAR(100) './Name'
    , Value VARCHAR(max) './Value'
    , Column VARCHAR(100) './Column'
)
EXEC sp_xml_removedocument @TestDoc
Below are my questions:
select * from #tData returns an empty table. Why is the data not getting populated?
What does sp_xml_preparedocument do? When I print @TestDoc, it gives me a number.
What is sp_xml_removedocument?
To answer your questions though:
#tData is empty because your SELECT statement returned no data. A SELECT...INTO statement will still create the table, even if the SELECT returns no rows. Why your SELECT is returning no data is impossible for us to say, because we have no sample data. If you remove the INTO clause you will see that no rows are returned, so you need to fix your SELECT, FROM, etc.; but that brings me on to my statement in a minute (about using XQuery).
sp_xml_preparedocument (Transact-SQL) explains it better than I could. Really though, you shouldn't be using it any more, as it was used to read XML back in SQL Server 2000 (maybe 2005) and prior. SQL Server 2008 certainly supported XQuery, which you must at least be on if you are using SSMS 2014. To quote the opening statement of the documentation though:
Reads the XML text provided as input, parses the text by using the MSXML parser (Msxmlsql.dll), and provides the parsed document in a state ready for consumption. This parsed document is a tree representation of the various nodes in the XML document: elements, attributes, text, comments, and so on.
sp_xml_removedocument (Transact-SQL); but again, you should be using XQuery.
Removes the internal representation of the XML document specified by the document handle and invalidates the document handle.
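For reference, a minimal XQuery-based sketch of the same shredding (assuming the element names used in the OPENXML query above):
DECLARE @INPUTXML xml = CAST(@Attribute AS xml);

SELECT ROW_NUMBER() OVER (ORDER BY d.n.value('(Name/text())[1]', 'varchar(100)')) AS Row,
       d.n.value('(ID/text())[1]', 'varchar(100)') AS ID,
       d.n.value('(Name/text())[1]', 'varchar(100)') AS Name,
       d.n.value('(Value/text())[1]', 'varchar(max)') AS Value,
       d.n.value('(Column/text())[1]', 'varchar(100)') AS [Column]
INTO #tData
FROM @INPUTXML.nodes('/DocumentElement/dtData') AS d(n);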

Performance issue Flattening XML in SQL using Nodes, value methods and cross apply

I am trying to flatten XML into a SQL table using the code below. The input table (#incomingTable) contains 10k untyped XML documents, and the query takes 7 seconds to return the output. When I checked the execution plan, I found most of the time is spent on the "Table Valued Function (XML Reader with XPATH filter)" step. My guess is this step refers to the value() method in the query.
The value() method uses the Transact-SQL CONVERT operator implicitly and tries to convert the result of the XQuery expression to the corresponding SQL type specified by the Transact-SQL conversion.
Questions: Is there any other XML method to retrieve an element/attribute value without the data type conversion? I want the data as a string anyhow; this would help me compare the results of the two approaches.
Is there any other way to optimize this query?
select
    sqlXml.value('@ID', 'varchar(50)') as XMLFieldName,
    sqlXml.value('@TS', 'varchar(50)') as XMLTSValue,
    sqlXml.value('.', 'varchar(800)') as XMLFieldValue
from #incomingTable
cross apply playfieldvalues.nodes('/PlayAttributes/PlayFields/PlayField') as XMLData(sqlXml)
Try to use OPENXML:
DECLARE @idoc int;
-- caveat: sp_xml_preparedocument takes a single XML document (a text/XML value),
-- so as written this assumes one document at a time rather than the whole table
EXEC sp_xml_preparedocument @idoc OUTPUT, #incomingTable;
SELECT *
FROM OPENXML (@idoc, '/PlayAttributes/PlayFields/PlayField', 1)
WITH (XMLFieldName varchar(50) '@ID',
      XMLTSValue varchar(50) '@TS',
      XMLFieldValue varchar(800) '.');
EXEC sp_xml_removedocument @idoc;
OPENXML allows accessing XML data as if it were a relational recordset. It provides a tabular (rowset) view of an in-memory representation of an XML document. Technically, OPENXML is a rowset provider similar to a table or a view; hence it can be used wherever a table or a view is used. For instance, you can use OPENXML with SELECT or SELECT INTO statements to work on an XML document that is retained in memory.
Source
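A commonly suggested tweak to the original nodes()/value() query (a sketch, not part of the answer above) is to address the text() node explicitly, which often removes the costly "XML Reader with XPATH filter" step for the element value:
SELECT
    sqlXml.value('@ID', 'varchar(50)') AS XMLFieldName,
    sqlXml.value('@TS', 'varchar(50)') AS XMLTSValue,
    sqlXml.value('(./text())[1]', 'varchar(800)') AS XMLFieldValue
FROM #incomingTable
CROSS APPLY playfieldvalues.nodes('/PlayAttributes/PlayFields/PlayField') AS XMLData(sqlXml);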

BLOB to String, SQL Server

I have a text string stored as a BLOB data type in a database. I want to extract it with a SQL SELECT query, but I have problems converting/casting from BLOB to readable text.
I've tried e.g.
select convert(nvarchar(40), convert(varbinary(40), BLOBTextToExtract))
from [NavisionSQL$Customer]
I guess I need something similar, but I can't figure out exactly what I need for the conversion. Can somebody please give me some directions?
Regards
The accepted answer works for me only for the first 30 characters.
This works for me:
select convert(varchar(max), convert(varbinary(max),myBlobColumn)) FROM table_name
The problem was apparently not SQL Server, but the NAV system that updates the field. There is a compression property that can be used on BLOB fields in NAV that is not part of SQL Server, so the custom compression made the data unreadable, even though the conversion worked.
The solution was to turn off compression through the Object Designer, Table Designer, Properties for the field (Shift+F4 on the field row).
After that, the extraction of the data can be done with e.g.:
select convert(varchar(max), cast(BLOBFIELD as binary))
from Table
-- note: CAST(... AS binary) without a length defaults to binary(30), which matches
-- the 30-character limit mentioned above; use varbinary(max) to avoid it
Thanks for all the answers, which were correct in many ways!
It depends on how the data was initially put into the column. Try either of these as one should work:
SELECT CONVERT(NVarChar(40), BLOBTextToExtract)
FROM [NavisionSQL$Customer];
Or if it was just varchar...
SELECT CONVERT(VarChar(40), BLOBTextToExtract)
FROM [NavisionSQL$Customer];
I used this script to verify and test on SQL Server 2K8 R2:
DECLARE @blob VarBinary(MAX) = CONVERT(VarBinary(MAX), 'test');
-- show the binary representation
SELECT @blob;
-- this doesn't work
SELECT CONVERT(NVarChar(100), @blob);
-- but this does
SELECT CONVERT(VarChar(100), @blob);
Can you try this:
select convert(nvarchar(max),convert(varbinary(max),blob_column)) from table_name
Found this...
bcp "SELECT top 1 BlobText FROM TableName" queryout "C:\DesinationFolder\FileName.txt" -T -c
If you need to know about different options of bcp flags...
http://msdn.microsoft.com/en-us/library/ms162802.aspx
-- note: this function is Oracle PL/SQL, not SQL Server
CREATE OR REPLACE FUNCTION HASTANE.getXXXXX(p_rowid in rowid) return VARCHAR2
as
    l_data long;
begin
    select XXXXXX into l_data from XXXXX where rowid = p_rowid;
    return substr(l_data, 1, 4000);
end getXXXXX;

sql - exceeding variable size in an exec?

I inherited some partially complete SQL code that I can't get to work.
It accesses multiple databases, so it first searches for the proper database using a userID number, then inserts that database name into a query. The part I'm having a problem with (extremely abbreviated) is...
DECLARE @sql AS VARCHAR(8000)
SET @sql = 'INSERT INTO ['+@DatabaseName+'].dbo.[customer]
( -- containing about 200 columns. )
VALUES(...)'
PRINT @sql
EXEC(@sql)
I would get errors in the middle of a column name, sometimes saying it's expecting a parenthesis or quote. I started deleting white space so that, e.g., [first name],[last name] were on the same line rather than two different lines, and that would get me a little further down the query. I don't have much more white space I can delete, and I'm only just getting into the VALUES(...) portion of it. The weird thing is, I copied and pasted just the columns portion into Word and it comes out as only about 3,000 characters, including white space.
Am I missing something?
If it means anything, I'm running Microsoft SQL Server 2005 and using SQL Server Management Studio for editing.
Thanks!
See here: SQL Server: When 8000 Characters Is Not Enough for a couple of solutions
extremely abbreviated
Well, that doesn't really help, since you have likely abbreviated away the cause of the issue.
If I were to guess, I have seen cases where NCHAR or CHAR variables/columns were involved. These expand to their full declared length when used in string concatenation, and that can cause the final statement to be too long.
For what it's worth, for style or otherwise, always use NVARCHAR(MAX) for SQL Server 2005 and onwards. In fact, that is the expected type if you use sp_executesql.
If you check for fixed-width N/CHAR columns and switch to nvarchar(max), you may see the problem go away.
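A minimal repro of that padding effect (the variable name and lengths here are made up for illustration):
-- CHAR is padded to its declared width, and the padding survives concatenation,
-- so the dynamic SQL string grows far faster than the visible text suggests
DECLARE @name char(2000);
DECLARE @sql varchar(8000);
SET @name = 'customer';  -- stored padded with trailing spaces to 2000 characters
SET @sql = 'INSERT INTO [' + @name + '].dbo.[customer] ...';
SELECT LEN(@sql);  -- 2033: the padded name dominates the total length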
EDIT: Test showing NVarchar(Max) holding well in excess of 8000 bytes.
declare @sql nvarchar(max)
-- this CTE sets up the columns: 1 as field1, 2 as field2, etc.
-- it creates 2000 columns
;with CTE(n, t) AS (
    select 1, convert(nvarchar(max), '1 as field1')
    union all
    select n+1, convert(nvarchar(max), RIGHT(n, 12) + ' as field' + RIGHT(n, 12))
    from cte
    where n < 2000)
select @sql = coalesce(@sql + ',', '') + t
from CTE
option (maxrecursion 2000) -- needed, the default of 100 is not nearly enough
-- add the SELECT bit to make a proper SQL statement
set @sql = 'select ' + @sql
-- check the length : 33786
select LEN(@sql)
-- check the content
print @sql
-- execute to get the columns
exec (@sql)
Use an nvarchar(max) datatype for @sql.