Use a JSON array in SQL Server instead of a traditional relational table (each value in 1 row) - sql

I have saved the answer values in a table as rows, 1 answer per row, 5 rows in this example.
If I migrate it to JSON it will be 2 rows (JSON).
Table
Id  Optionsid  Pid  Column
1   2          1    null
2   1          2    null
3   1          2    null
4   2          2    null
5   3          1    null
I want to calculate how many answers (Pid) there are for each Optionsid with:
SELECT COUNT(Pid) AS Counted, OptionsId
FROM Answer
GROUP BY [Column], OptionsId
Table Results
Counted  Optionsid
2        1
2        2
1        3
I have run this query and saved the result in a new table:
SELECT * FROM Answer FOR JSON AUTO
Json Table (I added {"Answer":} around the JSON)
id  pid  json
1   1    {"Answer":[{"Id":1,"Optionsid":2,"Pid":1}]}
2   2    {"Answer":[{"Id":2,"Optionsid":1,"Pid":2},{"Id":2,"Optionsid":1,"Pid":2},{"Id":3,"Optionsid":2,"Pid":2},{"Id":4,"Optionsid":3,"Pid":2}]}
I want to get the same result from the Json Table as the Table Results above, but I can't get it to work.
The query below only takes the first element [0] of the array; I want a query that takes all values in the array.
Can someone help me with this query?
SELECT COUNT(JSON_VALUE([json], '$.Answer[0].Pid')) AS Counted,
       JSON_VALUE([json], '$.Answer[0].Optionsid') AS OptionsId
FROM [PidJson]
GROUP BY JSON_VALUE([json], '$.Answer[0].Column'),
         JSON_VALUE([json], '$.Answer[0].Optionsid')
Here is a fiddle if you want to take a look:
https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=0a2df33717a3917bae699ea3983b70b4

Here is the solution:
SELECT COUNT(JsonData.Pid) AS Counted,
       JsonData.Optionsid
FROM PidJson AS RelationsTab
CROSS APPLY OPENJSON (RelationsTab.json, N'$.Answer')
WITH (
    Pid VARCHAR(200) N'$.Pid',
    Optionsid VARCHAR(200) N'$.Optionsid',
    ColumnValue INT N'$.Column'
) AS JsonData
GROUP BY JsonData.ColumnValue, JsonData.Optionsid
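With the sample JSON rows above, this returns the same counts as the relational query: Counted 2 for OptionsId 1, Counted 2 for OptionsId 2, and Counted 1 for OptionsId 3.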
Thanks for your time and for "forcing" me to clarify my question; that is how I found the solution.

Related

How to convert column values to single row value in postgresql

Data
id cust_name
1 walmart_ca
2 ikea_mo
2 ikea_ca
2 ikea_in
1 walmart_in
When I do
select id, cust_name from test where id = 2
the query returns the output below:
id cust_name
2 ikea_mo
2 ikea_ca
2 ikea_in
How can I get or store the result as a single column value, as shown below?
id cust_name
2 {ikea_mo,ikea_ca,ikea_in}
You should use the string_agg function; here is an example of it:
select string_agg(field_1, ',') from db.schema.table;
You must specify the separator; in your case it is a comma, so I am using string_agg(field_1, ',').
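A minimal sketch applied to the test table from the question (column names as shown there; the curly braces are plain string concatenation, an assumption about the desired output format):
select id,
       '{' || string_agg(cust_name, ',') || '}' as cust_name  -- aggregate all names for the id into one value
from test
where id = 2
group by id;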

How to extract characters from a string stored as json data and place them in dynamic number of columns in SQL Server

I have a column of strings in SQL Server that stores JSON data with all the braces and colons included.
My problem is to extract all the key/value pairs and store them in separate columns, with the key as the column header. What makes this challenging is that every record has a different number of key/value pairs.
For example, in the image below showing 3 records, the first record has 5 key/value pairs: EndUseCommunityMarket of 2, EndUseProvincialMarket of 0, and so on. The second record has 1 key/value pair, and the third record has two key/value pairs.
If I had to show how I want this in Excel, it would look like this:
I have seen some SQL code examples that do something similar, but for a fixed number of columns; here it varies for every record.
I need a SQL statement that can achieve this, as I am working with thousands of records.
Below is the data copied from SQL Server:
catch_ext
{"NfdsFadMonitoring":{"EndUseEaten":1}}
{"NfdsFadMonitoring":{"EndUseCommunityMarket":3}}
{"NfdsFadMonitoring":{"SpeciesComment":"","EndUseCommunityMarket":2}}
{"NfdsFadMonitoring":{"SpeciesComment":"mix reef fis","EndUseEaten":31}}
{"NfdsFadMonitoring":{"SpeciesComment":"10 fish with a total of 18kg","EndUseCommunityMarket":0,"EndUseProvincialMarket":0,"EndUseUrbanMarket":8,"EndUseEaten":1,"EndUseGivenAway":1}}
{"NfdsFadMonitoring":{"SpeciesComment":"mix reef fis","EndUseEaten":18}}
I expect you don't want to dynamically create a table; instead, you probably want to create a property mapping table. Here is a quick overview of the design.
Object table -- this stores the base information about your object
============
ID -- unique id field for every object.
Name
Property types table -- this stores all the property types
====================
Property_Type_ID -- unique type id
Description -- describes property
Object2Property -- stores the values for each property
===============
ObjectID -- the object
Property_Type_ID -- the property type
Value -- the value.
Using a model like this lets your properties be as dynamic as you wish, but you don't have to create columns dynamically -- something that is hard and error-prone.
Using your specific example, the tables would look like this:
OBJECT
ID  NAME
1   WAHOO
2   RED SNAPPER
3   KAWAKAWA

Property Types
ID  DESC
1   EndUseCommunityMarket
2   EndUseProvincialMarket
3   EndUseUrbanMarket
4   EndUseEaten
5   EndUseGivenAway
6   Comment

Map
ObjID  TypeID  Value
1      1       2
1      2       0
1      3       0
1      4       0
1      5       0
2      2       50
3      3       8
3      5       1
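A minimal DDL sketch of this design (a sketch only; the table and column names below are illustrative assumptions, not from the question):
CREATE TABLE Object (
    ID   INT IDENTITY PRIMARY KEY,   -- unique id for every object
    Name VARCHAR(200)
);

CREATE TABLE PropertyType (
    Property_Type_ID INT IDENTITY PRIMARY KEY,   -- unique type id
    Description      VARCHAR(200)                -- describes the property
);

CREATE TABLE Object2Property (
    ObjectID         INT REFERENCES Object(ID),                      -- the object
    Property_Type_ID INT REFERENCES PropertyType(Property_Type_ID),  -- the property type
    Value            VARCHAR(200)                                    -- the value, stored as text
);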
A. ROWS
Dynamic columns are a lot like rows.
You could use OPENJSON (Transact-SQL)
DECLARE @json2 NVARCHAR(4000) = N'{"NfdsFadMonitoring":{"SpeciesComment":"10 fish with a total of 18kg","EndUseCommunityMarket":0,"EndUseProvincialMarket":0,"EndUseUrbanMarket":8,"EndUseEaten":1,"EndUseGivenAway":1}}';
SELECT [key], value
FROM OPENJSON(@json2, 'lax $.NfdsFadMonitoring')
Output
key value
SpeciesComment 10 fish with a total of 18kg
EndUseCommunityMarket 0
EndUseProvincialMarket 0
EndUseUrbanMarket 8
EndUseEaten 1
EndUseGivenAway 1
Your inputs
CREATE TABLE ForEloga (Id int,Json nvarchar(max));
Insert into ForEloga Values
(1,'{"NfdsFadMonitoring":{"EndUseEaten":1}}'),
(2,'{"NfdsFadMonitoring":{"EndUseCommunityMarket":3}}'),
(3,'{"NfdsFadMonitoring":{"SpeciesComment":"","EndUseCommunityMarket":2}}'),
(4,'{"NfdsFadMonitoring":{"SpeciesComment":"mix reef fis","EndUseEaten":31}}'),
(5,'{"NfdsFadMonitoring":{"SpeciesComment":"10 fish with a total of 18kg","EndUseCommunityMarket":0,"EndUseProvincialMarket":0,"EndUseUrbanMarket":8,"EndUseEaten":1,"EndUseGivenAway":1}}'),
(6,'{"NfdsFadMonitoring":{"SpeciesComment":"mix reef fis","EndUseEaten":18}}');
SELECT Id, [key], value
FROM ForEloga CROSS APPLY OPENJSON(Json,'lax $.NfdsFadMonitoring')
Output
Id key value
1 EndUseEaten 1
2 EndUseCommunityMarket 3
3 SpeciesComment
3 EndUseCommunityMarket 2
4 SpeciesComment mix reef fis
4 EndUseEaten 31
5 SpeciesComment 10 fish with a total of 18kg
5 EndUseCommunityMarket 0
5 EndUseProvincialMarket 0
5 EndUseUrbanMarket 8
5 EndUseEaten 1
5 EndUseGivenAway 1
6 SpeciesComment mix reef fis
6 EndUseEaten 18
B. COLUMNS: CROSS APPLY with WITH
If you know all possible properties, then I recommend CROSS APPLY with WITH, as shown in Example 3 - Join rows with JSON data stored in table cells using CROSS APPLY in OPENJSON (Transact-SQL).
SELECT store.title, location.street, location.lat, location.long
FROM store
CROSS APPLY OPENJSON(store.jsonCol, 'lax $.location')
WITH (
    street   varchar(500),
    postcode varchar(500) '$.postcode',
    lon      int          '$.geo.longitude',
    lat      int          '$.geo.latitude'
) AS location
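Applied to the ForEloga table above, a sketch of that pattern might look like this (it assumes the six properties listed in the question are the complete set; missing keys simply come back as NULL):
SELECT f.Id,
       j.SpeciesComment, j.EndUseCommunityMarket, j.EndUseProvincialMarket,
       j.EndUseUrbanMarket, j.EndUseEaten, j.EndUseGivenAway
FROM ForEloga AS f
CROSS APPLY OPENJSON(f.Json, 'lax $.NfdsFadMonitoring')
WITH (
    SpeciesComment         varchar(500) '$.SpeciesComment',
    EndUseCommunityMarket  int          '$.EndUseCommunityMarket',
    EndUseProvincialMarket int          '$.EndUseProvincialMarket',
    EndUseUrbanMarket      int          '$.EndUseUrbanMarket',
    EndUseEaten            int          '$.EndUseEaten',
    EndUseGivenAway        int          '$.EndUseGivenAway'
) AS j;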
Try this:
Table Schema:
CREATE TABLE #JsonValue(sp_name VARCHAR(100),catch_ext VARCHAR(1000))
INSERT INTO #JsonValue VALUES ('WAHOO','{"NfdsFadMonitoring":{"EndUseEaten":1}}')
INSERT INTO #JsonValue VALUES ('RUBY SNAPPER','{"NfdsFadMonitoring":{"EndUseCommunityMarket":3}}')
INSERT INTO #JsonValue VALUES ('KAWAKAWA','{"NfdsFadMonitoring":{"SpeciesComment":"","EndUseCommunityMarket":2}}')
INSERT INTO #JsonValue VALUES ('XXXXXXXX','{"NfdsFadMonitoring":{"SpeciesComment":"mix reef fis","EndUseEaten":31}}')
INSERT INTO #JsonValue VALUES ('YYYYYYYY','{"NfdsFadMonitoring":{"SpeciesComment":"10 fish with a total of 18kg","EndUseCommunityMarket":0,"EndUseProvincialMarket":0,"EndUseUrbanMarket":8,"EndUseEaten":1,"EndUseGivenAway":1}}')
INSERT INTO #JsonValue VALUES ('ZZZZZZZZZZ','{"NfdsFadMonitoring":{"SpeciesComment":"mix reef fis","EndUseEaten":18}}')
Query:
SELECT sp_name
      ,ISNULL(MAX(CASE WHEN [Key] = 'EndUseCommunityMarket'  THEN Value END), '') AS EndUseCommunityMarket
      ,ISNULL(MAX(CASE WHEN [Key] = 'EndUseProvincialMarket' THEN Value END), '') AS EndUseProvincialMarket
      ,ISNULL(MAX(CASE WHEN [Key] = 'EndUseUrbanMarket'      THEN Value END), '') AS EndUseUrbanMarket
      ,ISNULL(MAX(CASE WHEN [Key] = 'EndUseEaten'            THEN Value END), '') AS EndUseEaten
      ,ISNULL(MAX(CASE WHEN [Key] = 'EndUseGivenAway'        THEN Value END), '') AS EndUseGivenAway
FROM (
    SELECT sp_name, [key], value
    FROM #JsonValue CROSS APPLY OPENJSON(catch_ext, '$.NfdsFadMonitoring')
) D
GROUP BY sp_name
Output:
sp_name EndUseCommunityMarket EndUseProvincialMarket EndUseUrbanMarket EndUseEaten EndUseGivenAway
------------- --------------------- ---------------------- ----------------- ----------- ---------------
KAWAKAWA 2
RUBY SNAPPER 3
WAHOO 1
XXXXXXXX 31
YYYYYYYY 0 0 8 1 1
ZZZZZZZZZZ 18
Hope this will help you.

How to store comma-separated values row by row in postgresql

Input:
('{"user":{"status":1,"loginid":1,"userids":{"userid":"5,6"}}}')
I want to insert into my table like this:
userid loginid status
---------------------------
5 1 1
6 1 1
Use regexp_split_to_table(). Assuming that the columns are integers:
with input_data(data) as (
    values
        ('{"user":{"status":1,"loginid":1,"userids":{"userid":"5,6"}}}'::json)
)
-- insert into my_table(userid, loginid, status)
select
    regexp_split_to_table(data->'user'->'userids'->>'userid', ',')::int as userid,
    (data->'user'->>'loginid')::int as loginid,
    (data->'user'->>'status')::int as status
from input_data
userid | loginid | status
--------+---------+--------
5 | 1 | 1
6 | 1 | 1
(2 rows)
It would be simpler with an array (a JSON array) to begin with. Then you can use json_array_elements_text(json). See:
How to turn json array into postgres array?
Convert the list you have to an array with string_to_array(), then unnest():
SELECT unnest(string_to_array(js#>>'{user,userids,userid}', ',')) AS userid
, (js#>>'{user,loginid}')::int AS loginid
, (js#>>'{user,status}')::int AS status
FROM (
SELECT json '{"user":{"status":1,"loginid":1,"userids":{"userid":"5,6"}}}'
) i(js);
db<>fiddle here
I advise Postgres 10 or later for the simple form with unnest() in the SELECT list. See:
What is the expected behaviour for multiple set-returning functions in select clause?
I avoid regexp functions for simple tasks. Those are powerful, but substantially more expensive.
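For completeness, a minimal sketch of the simpler JSON-array case mentioned above (this assumes the userids were stored as an actual JSON array, e.g. [5,6], which is not the format in the question):
SELECT json_array_elements_text(js#>'{user,userids,userid}')::int AS userid  -- one row per array element
     , (js#>>'{user,loginid}')::int AS loginid
     , (js#>>'{user,status}')::int AS status
FROM (
   SELECT json '{"user":{"status":1,"loginid":1,"userids":{"userid":[5,6]}}}'
   ) i(js);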

SQL Rows to Columns if column values are unknown

I have a table that has demographic information about a set of users which looks like this:
User_id Category IsMember
1 College 1
1 Married 0
1 Employed 1
1 Has_Kids 1
2 College 0
2 Married 1
2 Employed 1
3 College 0
3 Employed 0
The result set I want is a table that looks like this:
User_Id  College  Married  Employed  Has_Kids
1        1        0        1         1
2        0        1        1         0
3        0        0        0         0
In other words, the table indicates the presence or absence of a category for each user. Sometimes the user will have a category where the value is false; sometimes the user will have no row for a category, in which case IsMember is assumed to be false.
Also, from time to time additional categories will be added to the data set, and I'm wondering if it's possible to do this query without knowing up front all the possible category names; in other words, I won't be able to specify all the column names I want to count in the result. (Note: only user 1 has category "has_kids", and user 3 is missing a row for category "married".)
(using Postgres)
Thanks.
You can use jsonb functions.
with titles as (
    select jsonb_object_agg(Category, Category) as titles,
           jsonb_object_agg(Category, -1) as defaults
    from demog
),
the_rows as (
    select null::bigint as id, titles as data
    from titles
    union
    select User_id, defaults || jsonb_object_agg(Category, IsMember)
    from demog, titles
    group by User_id, defaults
)
select id, string_agg(value, '|' order by key)
from (
    select id, key, value
    from the_rows, jsonb_each_text(data)
) x
group by id
order by id nulls first
You can see a running example in http://rextester.com/QEGT70842
You can replace -1 with 0 for the default value and '|' with ',' for the separator.
You can install the tablefunc module and use the crosstab function.
https://www.postgresql.org/docs/9.1/static/tablefunc.html
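A minimal crosstab sketch, under the assumption that the table is named demog (as in the answer above) and that the category list is known when the query is written -- which is crosstab's main limitation for this question:
CREATE EXTENSION IF NOT EXISTS tablefunc;

SELECT *
FROM crosstab(
    'SELECT user_id, category, ismember FROM demog ORDER BY 1, 2',  -- source rows
    'SELECT DISTINCT category FROM demog ORDER BY 1'                -- category values, one output column each
) AS ct(user_id int, college int, employed int, has_kids int, married int);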
I found a Postgres function script called colpivot here which does the trick. Ran the script to create the function, then created the table in one statement:
select colpivot ('_pivoted', 'select * from user_categories', array['user_id'],
array ['category'], '#.is_member', null);

Getting a comma-delimited list of PK's for duplicates of a record in SQL Server 2005?

This is an off-shoot of a previous question I had: A little fuzzy on getting DISTINCT on one column?
This query makes a little more sense, given the data:
SELECT Receipts.ReceiptID, FolderLink.ReceiptFolderID
FROM dbo.tbl_ReceiptFolderLnk AS FolderLink
INNER JOIN dbo.tbl_Receipt AS Receipts
    ON FolderLink.ReceiptID = Receipts.ReceiptID
With results:
ReceiptID ReceiptFolderID NewColumn (duplicate folder ID list)
-------------------- --------------- ----------
1 3
2 3
3 7
4 <---> 4 8,9
5 4
6 1
3 8
4 <---> 8 4,9
4 <---> 9 4,8
That answer allowed me to view distinct ReceiptIDs... great. Now, those IDs, 3 and 4, exist in multiple ReceiptFolderIDs.
Given this NON-unique list of ReceiptIDs, I'd like an additional column of comma-delimited ReceiptFolderIDs where the ReceiptID also exists.
So for ReceiptID=4, the new column, say DuplicateFoldersList, should read "8,9", and similarly for ID=3 or any other duplicates.
So basically, I'd like another column that indicates the other ReceiptFolderIDs where the ReceiptID also occurs.
Thanks!
You can create a function that, given a ReceiptID and the "current" ReceiptFolderID for that row, returns the other ReceiptFolderIDs as a concatenated, comma-delimited list. Example:
CREATE FUNCTION [dbo].[GetOtherReceiptFolderIDs](@receiptID int, @receiptFolderID int)
RETURNS varchar(MAX) AS
BEGIN
    DECLARE @returnValue varchar(MAX)

    SELECT @returnValue = COALESCE(@returnValue + ', ', '') + COALESCE(CONVERT(varchar(MAX), ReceiptFolderID), '')
    FROM tbl_ReceiptFolderLink AS FolderLink
    WHERE FolderLink.ReceiptID = @receiptID
      AND FolderLink.ReceiptFolderID <> @receiptFolderID

    RETURN @returnValue
END
Then, you can run a query that uses this function to obtain your new column:
SELECT Receipts.ReceiptID, ReceiptFolderID,
       dbo.GetOtherReceiptFolderIDs(Receipts.ReceiptID, ReceiptFolderID) AS NewColumn
FROM tbl_Receipt AS Receipts
INNER JOIN tbl_ReceiptFolderLink AS FolderLinks
    ON Receipts.ReceiptID = FolderLinks.ReceiptID
I tested this and it produces the following results (if I got your schema correctly):
ReceiptID ReceiptFolderID NewColumn
6 1 NULL
1 3 NULL
2 3 NULL
4 4 8, 9
5 4 NULL
3 7 8
3 8 7
4 8 4, 9
4 9 4, 8
In MySQL there is the GROUP_CONCAT aggregate function, but in T-SQL and Oracle you need to use another approach... This site lists multiple approaches for T-SQL, but none are as simple and easy as MySQL's.
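For reference, a common SQL Server 2005 workaround is string concatenation via FOR XML PATH; here is a minimal sketch against the tables from the question (a sketch only, assuming the schema shown above):
SELECT Receipts.ReceiptID,
       FolderLink.ReceiptFolderID,
       STUFF((SELECT ', ' + CONVERT(varchar(MAX), Other.ReceiptFolderID)   -- build ", 8, 9" ...
              FROM dbo.tbl_ReceiptFolderLnk AS Other
              WHERE Other.ReceiptID = Receipts.ReceiptID
                AND Other.ReceiptFolderID <> FolderLink.ReceiptFolderID
              FOR XML PATH('')), 1, 2, '') AS DuplicateFoldersList          -- ... then strip the leading ", "
FROM dbo.tbl_Receipt AS Receipts
INNER JOIN dbo.tbl_ReceiptFolderLnk AS FolderLink
    ON Receipts.ReceiptID = FolderLink.ReceiptID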