How to convert column values to a single row value in PostgreSQL

Data
id cust_name
1 walmart_ca
2 ikea_mo
2 ikea_ca
2 ikea_in
1 walmart_in
When I run
select id, cust_name from test where id = 2
the query returns the output below:
id cust_name
2 ikea_mo
2 ikea_ca
2 ikea_in
How can I get, or store, the result as a single column value as shown below?
id cust_name
2 {ikea_mo,ikea_ca,ikea_in}

You should use the string_agg function. Applied to your table, it needs a GROUP BY:
select id, string_agg(cust_name, ',') from test group by id;
You have to specify the separator; in your case it's a comma, so the call is string_agg(cust_name, ','). If you want the braces exactly as in your desired output, array_agg(cust_name) returns a PostgreSQL array, which displays as {ikea_mo,ikea_ca,ikea_in}.
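As a runnable sketch of the same aggregation idea: SQLite's group_concat is the closest analogue of PostgreSQL's string_agg, so the query can be tried from Python's sqlite3 (table and column names follow the question; the braces are appended manually to mimic the array-style output):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test (id INTEGER, cust_name TEXT);
    INSERT INTO test VALUES
        (1, 'walmart_ca'), (2, 'ikea_mo'), (2, 'ikea_ca'),
        (2, 'ikea_in'),    (1, 'walmart_in');
""")

# group_concat is SQLite's counterpart of PostgreSQL's string_agg;
# GROUP BY id collapses the three id=2 rows into one
row = conn.execute(
    "SELECT id, '{' || group_concat(cust_name, ',') || '}' AS cust_name "
    "FROM test WHERE id = 2 GROUP BY id"
).fetchone()
print(row)
```

Note that group_concat does not guarantee element order; in PostgreSQL you can write string_agg(cust_name, ',' ORDER BY cust_name) to pin it down.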

Related

How do I replace NULL in datediff with a text

I have two different date columns and I'm using DATEDIFF to calculate the years of working. But there are NULLs when the person is still working; I wonder how I can replace them with the text 'Still Working'.
So this is the table that I have
ID DateofHire DateofTermination
1 2011-07-05 NULL
2 2015-03-30 2016-06-16
3 2011-07-05 2012-09-24
4 2008-01-07 NULL
I have used the DATEDIFF formula and it shows like this:
DATEDIFF(year,DateofHire,DateofTermination) as 'Years of working'
ID Years_of_Working
1 NULL
2 1
3 1
4 NULL
How can I replace the NULL after the DATEDIFF query? I want to replace it with 'Still Working'.
The desired table would be like this:
ID Years_of_Working
1 Still Working
2 1
3 1
4 Still Working
I assume that you will get Years_of_Working as integer values; replacing them with a value of a different type is inconsistent.
I would just replace NULL values with 0, which would implicitly mean that these people are still working.
So if the original table is called hire_info, then the solution would look like this:
with diff_years as (
select ID, DATEDIFF(year, DateofHire, DateofTermination) as Years_of_Working
from hire_info
)
select ID, ISNULL(Years_of_Working, 0) AS Years_of_Working
from diff_years
You can use this query (note: MySQL syntax). You should change the table name to your own table name.
SELECT id, IF(ISNULL(DateofTermination), 'Still Working', TIMESTAMPDIFF(year, DateofHire, DateofTermination)) AS Years_of_Working FROM date_diff;
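The text-instead-of-NULL idea can also be sketched with COALESCE. A minimal demo in SQLite via Python's sqlite3 (SQLite has no DATEDIFF, so the year difference is computed with strftime; note the result column mixes text and numbers, which SQLite and MySQL tolerate but SQL Server would require an explicit cast to a string type):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE hire_info (ID INTEGER, DateofHire TEXT, DateofTermination TEXT);
    INSERT INTO hire_info VALUES
        (1, '2011-07-05', NULL), (2, '2015-03-30', '2016-06-16'),
        (3, '2011-07-05', '2012-09-24'), (4, '2008-01-07', NULL);
""")

# COALESCE substitutes the label whenever the year difference is NULL
rows = conn.execute("""
    SELECT ID,
           COALESCE(CAST(strftime('%Y', DateofTermination) AS INTEGER)
                    - CAST(strftime('%Y', DateofHire) AS INTEGER),
                    'Still Working') AS Years_of_Working
    FROM hire_info
    ORDER BY ID
""").fetchall()
print(rows)
```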

Use JSON array in SQL server instead of traditional relational database, each value in 1 row

I have saved the answer values in a table, one answer per row; five rows in this example.
If I migrate it to JSON it will be 2 rows (JSON).
Table
Id Optionsid Pid Column
1 2 1 null
2 1 2 null
3 1 2 null
4 2 2 null
5 3 1 null
I want to calculate how many answers (pid) there are for each Optionsid with:
SELECT COUNT(pid) AS Counted, OptionsId
FROM Answer GROUP BY [Column], OptionsId
Table Results
Counted Optionsid
2 1
2 2
1 3
I have run this query and saved the result in a new table:
select * from Answer for Json Auto
Json Table (I added the {"Answer": ...} wrapper to the JSON)
id pid json
1 1 {"Answer":[{"Id":1,"Optionsid":2,"Pid":1}]}
2 2 {"Answer":[{"Id":2,"Optionsid":1,"Pid":2},{"Id":2,"Optionsid":1,"Pid":2},{"Id":3,"Optionsid":2,"Pid":2},{"Id":4,"Optionsid":3,"Pid":2}]}
I want to get the same result from the Json table as the table result above, but I can't get it to work.
This query only takes the first element ([0]) of the array; I want a query that takes all the values in the array.
Can someone help me with this query?
Select Count(Json_value([json], '$.Answer[0].Pid')) as Counted,
       Json_value([json], '$.Answer[0].Optionsid') as OptionsId
from [PidJson]
group by Json_value([json], '$.Answer[0].Column'),
         Json_value([json], '$.Answer[0].Optionsid')
Here is a fiddle if you want to see
https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=0a2df33717a3917bae699ea3983b70b4
Here is the solution
SELECT Count(JsonData.Pid) as Counted,
JsonData.Optionsid
FROM PidJson AS RelationsTab
CROSS APPLY OPENJSON (RelationsTab.json,
N'$.Answer')
WITH (
Pid VARCHAR(200) N'$.Pid',
Optionsid VARCHAR(200) N'$.Optionsid',
ColumnValue INT N'$.Column'
) AS JsonData
Group by JsonData.ColumnValue, JsonData.Optionsid
Thanks for your time, and for "forcing" me to clarify my question; in doing so I found the solution.
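Outside SQL Server, SQLite's json_each plays the same role as OPENJSON here. A sketch via Python's sqlite3 (requires an SQLite build with the JSON functions, which is standard in recent Python distributions; the two JSON rows are reconstructed from the relational table above, grouped by pid):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PidJson (id INTEGER, pid INTEGER, json TEXT)")
# Rows reconstructed from the relational table:
# (Id, Optionsid, Pid) = (1,2,1), (2,1,2), (3,1,2), (4,2,2), (5,3,1)
conn.execute("INSERT INTO PidJson VALUES (1, 1, ?)",
             ('{"Answer":[{"Id":1,"Optionsid":2,"Pid":1},'
              '{"Id":5,"Optionsid":3,"Pid":1}]}',))
conn.execute("INSERT INTO PidJson VALUES (2, 2, ?)",
             ('{"Answer":[{"Id":2,"Optionsid":1,"Pid":2},'
              '{"Id":3,"Optionsid":1,"Pid":2},'
              '{"Id":4,"Optionsid":2,"Pid":2}]}',))

# json_each expands every element of the $.Answer array into its own row,
# so the GROUP BY sees all elements instead of only index [0]
rows = conn.execute("""
    SELECT COUNT(json_extract(value, '$.Pid')) AS Counted,
           json_extract(value, '$.Optionsid') AS Optionsid
    FROM PidJson, json_each(PidJson.json, '$.Answer')
    GROUP BY Optionsid
    ORDER BY Optionsid
""").fetchall()
print(rows)
```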

How to extract characters from a string stored as json data and place them in dynamic number of columns in SQL Server

I have a column of string in SQL Server that stores JSON data with all the braces and colons included.
My problem is to extract all the key and value pairs and store them in separate columns with the key as the column header. What makes this challenging is that every record has a different number of key/value pairs.
For example, in the image below showing 3 records, the first record has five key/value pairs: EndUseCommunityMarket of 2, EndUseProvincialMarket of 0, and so on. The second record has one key/value pair, and the third record has two key/value pairs.
If I had to show how I want this in Excel, it would be like:
I have seen some SQL code examples that do something similar, but for a fixed number of columns, unlike this case where the number varies for every record.
Please, I need a SQL statement that can achieve this, as I am working with thousands of records.
Below is the data copied from SQL Server:
catch_ext
{"NfdsFadMonitoring":{"EndUseEaten":1}}
{"NfdsFadMonitoring":{"EndUseCommunityMarket":3}}
{"NfdsFadMonitoring":{"SpeciesComment":"","EndUseCommunityMarket":2}}
{"NfdsFadMonitoring":{"SpeciesComment":"mix reef fis","EndUseEaten":31}}
{"NfdsFadMonitoring":{"SpeciesComment":"10 fish with a total of 18kg","EndUseCommunityMarket":0,"EndUseProvincialMarket":0,"EndUseUrbanMarket":8,"EndUseEaten":1,"EndUseGivenAway":1}}
{"NfdsFadMonitoring":{"SpeciesComment":"mix reef fis","EndUseEaten":18}}
I expect you don't want to dynamically create a table; instead you probably want to create a property mapping table. Here is a quick overview of the design.
Object table -- this stores the base information about your object
============
ID -- unique id field for every object.
Name
Property types table -- this stores all the property types
====================
Property_Type_ID -- unique type id
Description -- describes property
Object2Property -- stores the values for each property
===============
ObjectID -- the object
Property_Type_ID -- the property type
Value -- the value.
Using a model like this lets your properties be as dynamic as you wish, but you don't have to create columns dynamically, something that is hard and error-prone.
Using your specific example, the tables would look like this:
OBJECT
ID NAME
1 WHAOO
2 RED SNAPPER
3 KAWAKAWA
Property Types
ID DESC
1 EndUseCommunityMarket
2 EndUseProvincialMarket
3 EndUseUrbanMarket
4 EndUseEaten
5 EndUseGivenAway
6 Comment
Map
ObjID TypeID Value
1 1 2
1 2 0
1 3 0
1 4 0
1 5 0
2 2 50
3 3 8
3 5 1
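The three-table design above can be sketched end to end. A minimal demo in SQLite via Python's sqlite3 (table names follow the answer; the sample rows are illustrative, covering only two objects and two property types):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Object (ID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE PropertyType (Property_Type_ID INTEGER PRIMARY KEY,
                               Description TEXT);
    CREATE TABLE Object2Property (
        ObjectID INTEGER REFERENCES Object(ID),
        Property_Type_ID INTEGER REFERENCES PropertyType(Property_Type_ID),
        Value TEXT
    );
    INSERT INTO Object VALUES (1, 'WAHOO'), (2, 'RED SNAPPER');
    INSERT INTO PropertyType VALUES (1, 'EndUseCommunityMarket'),
                                    (4, 'EndUseEaten');
    INSERT INTO Object2Property VALUES (1, 4, '1'), (2, 1, '3');
""")

# Reading an object's properties is a plain join; adding a new property
# type later is just another row, never a new column
rows = conn.execute("""
    SELECT o.Name, pt.Description, op.Value
    FROM Object2Property AS op
    JOIN Object AS o ON o.ID = op.ObjectID
    JOIN PropertyType AS pt ON pt.Property_Type_ID = op.Property_Type_ID
    ORDER BY o.ID
""").fetchall()
print(rows)
```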
A. ROWS
Dynamic columns are a lot like rows.
You could use OPENJSON (Transact-SQL)
DECLARE @json2 NVARCHAR(4000) = N'{"NfdsFadMonitoring":{"SpeciesComment":"10 fish with a total of 18kg","EndUseCommunityMarket":0,"EndUseProvincialMarket":0,"EndUseUrbanMarket":8,"EndUseEaten":1,"EndUseGivenAway":1}}';
SELECT [key], value
FROM OPENJSON(@json2, 'lax $.NfdsFadMonitoring')
Output
key value
SpeciesComment 10 fish with a total of 18kg
EndUseCommunityMarket 0
EndUseProvincialMarket 0
EndUseUrbanMarket 8
EndUseEaten 1
EndUseGivenAway 1
Your inputs
CREATE TABLE ForEloga (Id int,Json nvarchar(max));
Insert into ForEloga Values
(1,'{"NfdsFadMonitoring":{"EndUseEaten":1}}'),
(2,'{"NfdsFadMonitoring":{"EndUseCommunityMarket":3}}'),
(3,'{"NfdsFadMonitoring":{"SpeciesComment":"","EndUseCommunityMarket":2}}'),
(4,'{"NfdsFadMonitoring":{"SpeciesComment":"mix reef fis","EndUseEaten":31}}'),
(5,'{"NfdsFadMonitoring":{"SpeciesComment":"10 fish with a total of 18kg","EndUseCommunityMarket":0,"EndUseProvincialMarket":0,"EndUseUrbanMarket":8,"EndUseEaten":1,"EndUseGivenAway":1}}'),
(6,'{"NfdsFadMonitoring":{"SpeciesComment":"mix reef fis","EndUseEaten":18}}');
SELECT Id, [key], value
FROM ForEloga CROSS APPLY OPENJSON(Json,'lax $.NfdsFadMonitoring')
Output
Id key value
1 EndUseEaten 1
2 EndUseCommunityMarket 3
3 SpeciesComment
3 EndUseCommunityMarket 2
4 SpeciesComment mix reef fis
4 EndUseEaten 31
5 SpeciesComment 10 fish with a total of 18kg
5 EndUseCommunityMarket 0
5 EndUseProvincialMarket 0
5 EndUseUrbanMarket 8
5 EndUseEaten 1
5 EndUseGivenAway 1
6 SpeciesComment mix reef fis
6 EndUseEaten 18
B. COLUMNS: CROSS APPLY WITH WITH
If you know all possible properties, then I recommend CROSS APPLY with WITH, as shown in "Example 3 - Join rows with JSON data stored in table cells using CROSS APPLY" in OPENJSON (Transact-SQL).
SELECT store.title, location.street, location.lat, location.long
FROM store
CROSS APPLY OPENJSON(store.jsonCol, 'lax $.location')
WITH (
street varchar(500),
postcode varchar(500) '$.postcode',
long int '$.geo.longitude',
lat int '$.geo.latitude'
) AS location
Try this:
Table Schema:
CREATE TABLE #JsonValue(sp_name VARCHAR(100),catch_ext VARCHAR(1000))
INSERT INTO #JsonValue VALUES ('WAHOO','{"NfdsFadMonitoring":{"EndUseEaten":1}}')
INSERT INTO #JsonValue VALUES ('RUBY SNAPPER','{"NfdsFadMonitoring":{"EndUseCommunityMarket":3}}')
INSERT INTO #JsonValue VALUES ('KAWAKAWA','{"NfdsFadMonitoring":{"SpeciesComment":"","EndUseCommunityMarket":2}}')
INSERT INTO #JsonValue VALUES ('XXXXXXXX','{"NfdsFadMonitoring":{"SpeciesComment":"mix reef fis","EndUseEaten":31}}')
INSERT INTO #JsonValue VALUES ('YYYYYYYY','{"NfdsFadMonitoring":{"SpeciesComment":"10 fish with a total of 18kg","EndUseCommunityMarket":0,"EndUseProvincialMarket":0,"EndUseUrbanMarket":8,"EndUseEaten":1,"EndUseGivenAway":1}}')
INSERT INTO #JsonValue VALUES ('ZZZZZZZZZZ','{"NfdsFadMonitoring":{"SpeciesComment":"mix reef fis","EndUseEaten":18}}')
Query:
SELECT sp_name
,ISNULL(MAX(CASE WHEN [Key]='EndUseCommunityMarket' THEN Value END),'')EndUseCommunityMarket
,ISNULL(MAX(CASE WHEN [Key]='EndUseProvincialMarket' THEN Value END),'')EndUseProvincialMarket
,ISNULL(MAX(CASE WHEN [Key]='EndUseUrbanMarket' THEN Value END),'')EndUseUrbanMarket
,ISNULL(MAX(CASE WHEN [Key]='EndUseEaten' THEN Value END),'')EndUseEaten
,ISNULL(MAX(CASE WHEN [Key]='EndUseGivenAway' THEN Value END),'')EndUseGivenAway
FROM(
SELECT sp_name, [key], value
FROM #JsonValue CROSS APPLY OPENJSON(catch_ext,'$.NfdsFadMonitoring')
)D
GROUP BY sp_name
Output:
sp_name EndUseCommunityMarket EndUseProvincialMarket EndUseUrbanMarket EndUseEaten EndUseGivenAway
------------- --------------------- ---------------------- ----------------- ----------- ---------------
KAWAKAWA 2
RUBY SNAPPER 3
WAHOO 1
XXXXXXXX 31
YYYYYYYY 0 0 8 1 1
ZZZZZZZZZZ 18
Hope this will help you.
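The same explode-then-pivot pattern works outside SQL Server: SQLite's json_each stands in for OPENJSON, and the MAX(CASE ...) conditional aggregation is identical. A sketch via Python's sqlite3 (only two of the sample rows, for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE JsonValue (sp_name TEXT, catch_ext TEXT);
    INSERT INTO JsonValue VALUES
        ('WAHOO', '{"NfdsFadMonitoring":{"EndUseEaten":1}}'),
        ('RUBY SNAPPER', '{"NfdsFadMonitoring":{"EndUseCommunityMarket":3}}');
""")

# json_each explodes each key/value pair of the nested object into a row;
# MAX(CASE ...) then pivots the known keys back into columns
rows = conn.execute("""
    SELECT sp_name,
           IFNULL(MAX(CASE WHEN key = 'EndUseCommunityMarket'
                           THEN value END), '') AS EndUseCommunityMarket,
           IFNULL(MAX(CASE WHEN key = 'EndUseEaten'
                           THEN value END), '') AS EndUseEaten
    FROM JsonValue, json_each(catch_ext, '$.NfdsFadMonitoring')
    GROUP BY sp_name
    ORDER BY sp_name
""").fetchall()
print(rows)
```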

SQL Rows to Columns if column values are unknown

I have a table that has demographic information about a set of users which looks like this:
User_id Category IsMember
1 College 1
1 Married 0
1 Employed 1
1 Has_Kids 1
2 College 0
2 Married 1
2 Employed 1
3 College 0
3 Employed 0
The result set I want is a table that looks like this:
User_Id|College|Married|Employed|Has_Kids
1 1 0 1 1
2 0 1 1 0
3 0 0 0 0
In other words, the table indicates the presence or absence of a category for each user. Sometimes the user will have a category whose value is false; sometimes the user will have no row for a category, in which case IsMember is assumed to be false.
Also, from time to time additional categories will be added to the data set, and I'm wondering if it's possible to do this query without knowing all the possible category names up front; in other words, I won't be able to specify all the column names I want in the result. (Note that only user 1 has the category "Has_Kids", and user 3 is missing a row for the category "Married".)
(using Postgres)
Thanks.
You can use jsonb functions.
with titles as (
select jsonb_object_agg(Category, Category) as titles,
jsonb_object_agg(Category, -1) as defaults
from demog
),
the_rows as (
select null::bigint as id, titles as data
from titles
union
select User_id, defaults || jsonb_object_agg(Category, IsMember)
from demog, titles
group by User_id, defaults
)
select id, string_agg(value, '|' order by key)
from (
select id, key, value
from the_rows, jsonb_each_text(data)
) x
group by id
order by id nulls first
You can see a running example in http://rextester.com/QEGT70842
You can replace -1 with 0 for the default value and '|' with ',' for the separator.
You can install the tablefunc module and use the crosstab function.
https://www.postgresql.org/docs/9.1/static/tablefunc.html
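When the categories are known in advance, plain conditional aggregation is a dialect-neutral fallback (it does not solve the "unknown columns" part of the question, but it is the baseline the dynamic approaches build on). A sketch in SQLite via Python's sqlite3, reusing the demog table name from the jsonb answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE demog (User_id INTEGER, Category TEXT, IsMember INTEGER);
    INSERT INTO demog VALUES
        (1,'College',1), (1,'Married',0), (1,'Employed',1), (1,'Has_Kids',1),
        (2,'College',0), (2,'Married',1), (2,'Employed',1),
        (3,'College',0), (3,'Employed',0);
""")

# One MAX(CASE ...) per category; users with no row for a category
# fall through to the ELSE 0, i.e. absence reads as false
rows = conn.execute("""
    SELECT User_id,
           MAX(CASE WHEN Category = 'College'  THEN IsMember ELSE 0 END) AS College,
           MAX(CASE WHEN Category = 'Married'  THEN IsMember ELSE 0 END) AS Married,
           MAX(CASE WHEN Category = 'Employed' THEN IsMember ELSE 0 END) AS Employed,
           MAX(CASE WHEN Category = 'Has_Kids' THEN IsMember ELSE 0 END) AS Has_Kids
    FROM demog
    GROUP BY User_id
    ORDER BY User_id
""").fetchall()
print(rows)
```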
I found a Postgres function script called colpivot here which does the trick. I ran the script to create the function, then created the pivoted table in one statement:
select colpivot ('_pivoted', 'select * from user_categories', array['user_id'],
array ['category'], '#.is_member', null);

Update column values with split values

I have a SQL table Product:
id name value type
1 a abs#123 1
2 b abs#123 2
3 c abs#123 1
How can I update the value column for products of type 1 with the value before the #, meaning that abs#123 becomes abs? So I have to split the value column on #.
Use the LEFT and CHARINDEX functions. Try this:
update Product
set value=left(value,charindex('#',value)-1)
where type=1
Or use SUBSTRING:
update Product
set value=substring(value,1,charindex('#',value)-1)
where type=1
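The same split-and-update can be tried in SQLite via Python's sqlite3, where instr() plays the role of CHARINDEX and substr() the role of SUBSTRING/LEFT:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Product (id INTEGER, name TEXT, value TEXT, type INTEGER);
    INSERT INTO Product VALUES
        (1, 'a', 'abs#123', 1), (2, 'b', 'abs#123', 2), (3, 'c', 'abs#123', 1);
""")

# Keep everything before the '#'; only type-1 rows are touched
conn.execute("""
    UPDATE Product
    SET value = substr(value, 1, instr(value, '#') - 1)
    WHERE type = 1
""")
rows = conn.execute("SELECT id, value FROM Product ORDER BY id").fetchall()
print(rows)
```

Note that if a value contains no '#', instr returns 0 and substr(value, 1, -1) yields an empty string, so in production you would guard the WHERE clause with instr(value, '#') > 0.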