The default charset of my database is Windows-1251, but when I use the FOR JSON statement, the result is presented in UTF-8.
Is it possible to convert a column to UTF-8 using SQL?
I tried this code:
select
1 as "tag",
null as "parent",
"period" as "!1!period",
"nazva" as "!1!nazva",
"DogovorNumber" as "!1!DogovorNumber"
from "dba"."Myk_Orgs_for_1C"(#cmonth = 3,#cyear = 2022)
order by 3 asc for json explicit
But my column values are shown like this:
"nazva":"ГОРОЯН НАIРА"
You can work around the encoding issue with the OUTPUT statement:
select
1 as "tag",
null as "parent",
"period" as "!1!period",
"nazva" as "!1!nazva",
"DogovorNumber" as "!1!DogovorNumber"
from "dba"."Myk_Orgs_for_1C"(#cmonth = 3,#cyear = 2022)
order by 3 asc for json explicit
;
output to 'C:\\out.json' format text escapes on escape character '\' delimited by '' encoding 'CP-1251'
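If post-processing outside the database is an option, the same re-encoding step can be sketched in Python (the file name and the sample value are illustrative, not from the original post):

```python
# Sketch: write UTF-8 JSON text out as a Windows-1251 (CP-1251) file,
# mirroring the OUTPUT ... ENCODING 'CP-1251' workaround above.
utf8_json = '{"nazva": "ГОРОЯН НАІРА"}'  # illustrative Cyrillic value

with open("out.json", "w", encoding="cp1251") as f:
    f.write(utf8_json)

# Reading the file back with the same codec round-trips the text:
with open("out.json", encoding="cp1251") as f:
    assert f.read() == utf8_json
```

Note that this only works while every character in the JSON text exists in the CP-1251 code page.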
I am crawling data from Google BigQuery and staging it into Athena.
One of the columns, crawled as a string, contains JSON:
{
"key": "Category",
"value": {
"string_value": "something"
}
}
I need to unnest these and flatten them to be able to use them in a query. I require the key and the string value (so in my query it will be where Category = something).
I have tried the following :
WITH dataset AS (
SELECT cast(json_column as json) as json_column
from "thedatabase"
LIMIT 10
)
SELECT
json_extract_scalar(json_column, '$.value.string_value') AS string_value
FROM dataset
which is returning null.
Casting the json_column as json adds \ characters into it:
"[{\"key\":\"something\",\"value\":{\"string_value\":\"app\"}}]"
If I use replace on the json, it doesn't allow me as it's not a varchar object.
So how do I extract the values from the some_column field?
Presto's json_extract_scalar actually supports extracting directly from a varchar (string) value:
-- sample data
WITH dataset(json_column) AS (
values ('{
"key": "Category",
"value": {
"string_value": "something"
}}')
)
--query
SELECT
json_extract_scalar(json_column, '$.value.string_value') AS string_value
FROM dataset;
Output:
string_value
something
Casting to json will encode the data as JSON (in the case of a string you will get a double-encoded one), not parse it. Use json_parse to parse (in this particular case it is not needed, but there are cases when you will want it):
-- query
SELECT
json_extract_scalar(json_parse(json_column), '$.value.string_value') AS string_value
FROM dataset;
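As a rough Python analogy of the two behaviors, with json.dumps standing in for CAST(... AS json) and json.loads for json_parse:

```python
import json

raw = '{"key": "Category", "value": {"string_value": "something"}}'

# Serializing a string that already holds JSON double-encodes it,
# which is effectively what CAST(varchar AS json) does -- this is
# where the backslashes from the question come from:
double_encoded = json.dumps(raw)
print(double_encoded.startswith('"{'))  # True: a quoted, escaped string

# Parsing the string yields the structure, which is what json_parse does:
parsed = json.loads(raw)
print(parsed["value"]["string_value"])  # something
```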
I have a PostgreSQL table containing a column KeyValuePairs (of type text) with key/value pairs separated by a semicolon character:
Id  Name  KeyValuePairs
1   A     Key1:Value1;Key2:Value2;Key3:Value3
2   B     Key10:Value10;Key11:Value11
3   C     Key20:Value20;Key21:Value21;Key22:Value22;Key23:Value23;Key24:Value24
How do I convert the text of KeyValuePairs into JSON syntax (still of type text, not json) using a SQL query?
The expected result is:
Id  Name  KeyValuePairs
1   A     { "Key1": "Value1", "Key2": "Value2", "Key3": "Value3" }
2   B     { "Key10": "Value10", "Key11": "Value11" }
3   C     etc.
Update: Solution
Using the solution from Serg I was able to solve it; see the SQL Fiddle.
You can replace the delimiters:
select Id, Name,
       '{"' || replace(replace(KeyValuePairs, ':', '":"'), ';', '","') || '"}' as KeyValuePairs
from yourtable
Alternatively, on the client side you can parse the string into a JSON object:
JObject json = JObject.Parse(str);
You might want to refer to the Json.NET documentation.
I have column A with the value hello.
I need to migrate it into a new column AJson with the value ["hello"].
I have to do this with a SQL Server command.
There are commands like FOR JSON, but they serialize the value together with the column name.
This is the same value that the C# call JsonConvert.SerializeObject(new List<string>(){"hello"}) would produce.
I can't simply attach [" at the beginning and end, because the string value may contain characters which, without proper serialization, will break the JSON string.
My advice is to do it yourself with a set of nested replaces.
FOR JSON is intended for producing entire JSON documents and is therefore not valid without keys.
Here is a simple example that replaces the newline with \n:
print replace('ab
c','
','\n')
Backspace to be replaced with \b.
Form feed to be replaced with \f.
Newline to be replaced with \n.
Carriage return to be replaced with \r.
Tab to be replaced with \t.
Double quote to be replaced with \".
Backslash to be replaced with \\.
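For comparison, Python's json.dumps applies exactly this escape table and produces the ["hello"] shape directly; it is a handy cross-check for the hand-rolled replaces (this is not SQL Server code):

```python
import json

# A value containing every character from the escape list above:
value = 'a\bb\fc\nd\re\tf"g\\h'
encoded = json.dumps([value])
print(encoded)

# Round-tripping proves the escapes are correct:
assert json.loads(encoded) == [value]
```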
My approach was to use these 3 commands:
UPDATE Offers
SET [DetailsJson] =
(SELECT TOP 1 [Details] AS A
FROM Offers AS B
WHERE B.Id = Offers.Id
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER)
UPDATE Offers
SET [DetailsJson] = Substring([DetailsJson], 6, LEN([DetailsJson]) - 6)
UPDATE Offers
SET [DetailsJson] = '[' + [DetailsJson] + ']'
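The three UPDATEs amount to: wrap the value as {"A":"..."}, strip that wrapper, then add the brackets. A quick Python check of the substring arithmetic (the sample value is illustrative):

```python
# FOR JSON PATH, WITHOUT_ARRAY_WRAPPER output for a column aliased A:
wrapped = '{"A":"hello"}'

# SUBSTRING(x, 6, LEN(x) - 6) in 1-based T-SQL is x[5:-1] in Python:
inner = wrapped[5:-1]
print(inner)   # "hello" (quotes kept)

result = '[' + inner + ']'
print(result)  # ["hello"]
```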
For the OP's table:
UPDATE Offers
SET [DetailsJson] = concat(N'["', string_escape([Details], 'json'), N'"]');
declare @col nvarchar(100) = N'
a b c " : [ ] ]
x
y
z'
select concat(N'["', string_escape(@col, 'json'), N'"]'), isjson(concat(N'["', string_escape(@col, 'json'), N'"]'));
I am currently working through Step 2 of this tutorial from Snowflake but am using my own JSON, stored in a column I'll call my_column in a table called my_table:
https://docs.snowflake.com/en/user-guide/json-basics-tutorial-query.html
The JSON file I am using has a key that contains the '#' character.
Example:
"#characteristics": {
"XXX": "XXXX",
"YYY": "YYYY",
"ZZZ": "ZZZZ"
}
I try to use a SELECT statement that includes the FLATTEN function, i.e. something like this:
select
value:xxx::number
from
my_table
, lateral flatten( input => my_column:#characteristics);
When I try this, Snowflake gives me the error 'SQL compilation error: syntax error line 3 at position 57 unexpected '#characteristics'.' I have tried escaping the '#' character in front of attributes, but have not had any luck.
You need to use double quotes. Try this one:
create table my_table ( my_column variant ) as
select parse_json ( $$ { "#characteristics": {
"XXX": "XXXX",
"YYY": "YYYY",
"ZZZ": "ZZZZ"
}} $$ );
select
value
from my_table
,lateral flatten( input => my_column:"#characteristics");
I have a column stored in JSON that looks like
column name: s2s_payload
Values:
{
"checkoutdate":"2019-10-31",
"checkindate":"2019-10-30",
"numtravelers":"2",
"domain":"www.travel.com.mx",
"destination": {
"country":"MX",
"city":"Manzanillo"
},
"eventtype":"search",
"vertical":"hotels"
}
I want to query exact values in the array rather than returning all values for a certain data type. I was using JSON_EXTRACT to get distinct counts.
SELECT
COUNT(JSON_EXTRACT(s2s_payload, '$.destination.code')) AS total,
JSON_EXTRACT(s2s_payload, '$.destination.code') AS destination
FROM
"db"."events_data_json5_temp"
WHERE
id = '111000'
AND s2s_payload IS NOT NULL
AND yr = '2019'
AND mon = '10'
AND dt >= '26'
AND JSON_EXTRACT(s2s_payload, '$.destination.code')
GROUP BY
JSON_EXTRACT(s2s_payload, '$.destination.code')
If I want to filter where "eventtype":"search", how can I do this?
I tried using CAST(s2s_payload AS CHAR) = '{"eventtype":"search"}' but that didn't work.
You need to use json_extract plus a CAST to get the actual value to compare against:
CAST(json_extract(s2s_payload, '$.eventtype') AS varchar) = 'search'
or do the same with json_extract_scalar (and thus with no need for a CAST):
json_extract_scalar(s2s_payload, '$.eventtype')