Input:
('{"user":{"status":1,"loginid":1,"userids":{"userid":"5,6"}}}')
I want to insert into my table like this:
userid | loginid | status
-------+---------+-------
     5 |       1 |      1
     6 |       1 |      1
Use regexp_split_to_table(). Assuming that the columns are integers:
with input_data(data) as (
   values ('{"user":{"status":1,"loginid":1,"userids":{"userid":"5,6"}}}'::json)
)
-- insert into my_table (userid, loginid, status)
select regexp_split_to_table(data->'user'->'userids'->>'userid', ',')::int as userid,
       (data->'user'->>'loginid')::int as loginid,
       (data->'user'->>'status')::int as status
from input_data;
userid | loginid | status
--------+---------+--------
5 | 1 | 1
6 | 1 | 1
(2 rows)
It would be simpler with a JSON array to begin with. Then you could use json_array_elements_text(json). See:
How to turn json array into postgres array?
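A minimal sketch of that variant (assuming the input had carried a real JSON array, i.e. "userids":[5,6]):

select json_array_elements_text(js#>'{user,userids}')::int as userid
from  (select json '{"user":{"status":1,"loginid":1,"userids":[5,6]}}') i(js);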
Convert the list you have to an array with string_to_array(). Then unnest().
SELECT unnest(string_to_array(js#>>'{user,userids,userid}', ',')) AS userid
, (js#>>'{user,loginid}')::int AS loginid
, (js#>>'{user,status}')::int AS status
FROM (
SELECT json '{"user":{"status":1,"loginid":1,"userids":{"userid":"5,6"}}}'
) i(js);
db<>fiddle here
I advise Postgres 10 or later for the simple form with unnest() in the SELECT list. See:
What is the expected behaviour for multiple set-returning functions in select clause?
I avoid regexp functions for simple tasks. Those are powerful, but substantially more expensive.
I have saved the answer values in a table as rows, 1 answer = 1 row, 5 rows in this example.
If I migrate it to JSON it will be 2 rows (JSON).
Table

Id | Optionsid | Pid | Column
---+-----------+-----+-------
 1 |         2 |   1 | null
 2 |         1 |   2 | null
 3 |         1 |   2 | null
 4 |         2 |   2 | null
 5 |         3 |   1 | null
I want to calculate how many answers (Pid) there are for each Optionsid with:

SELECT COUNT(Pid) AS Counted, OptionsId
FROM Answer
GROUP BY [Column], OptionsId
Table Results

Counted | Optionsid
--------+----------
      2 |         1
      2 |         2
      1 |         3
I have run this query and saved the result in a new table:

SELECT * FROM Answer FOR JSON AUTO

Json Table (I added the {"Answer": ...} wrapper to the JSON):
id | pid | json
---+-----+--------------------------------------------------------------
 1 |   1 | {"Answer":[{"Id":1,"Optionsid":2,"Pid":1}]}
 2 |   2 | {"Answer":[{"Id":2,"Optionsid":1,"Pid":2},{"Id":2,"Optionsid":1,"Pid":2},{"Id":3,"Optionsid":2,"Pid":2},{"Id":4,"Optionsid":3,"Pid":2}]}
I want to get the same result from the Json Table as the Table Results above, but I can't get it to work.
This query only takes the first element [0] of each array; I want a query that takes all the values in the array.
Can someone help me with this query?
SELECT Count(Json_value([json], '$.Answer[0].Pid')) AS Counted,
       Json_value([json], '$.Answer[0].Optionsid') AS OptionsId
FROM [PidJson]
GROUP BY Json_value([json], '$.Answer[0].Column'),
         Json_value([json], '$.Answer[0].Optionsid')
Here is a fiddle if you want to see
https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=0a2df33717a3917bae699ea3983b70b4
Here is the solution
SELECT Count(JsonData.Pid) AS Counted,
       JsonData.Optionsid
FROM PidJson AS RelationsTab
CROSS APPLY OPENJSON (RelationsTab.json, N'$.Answer')
WITH (
   Pid VARCHAR(200) N'$.Pid',
   Optionsid VARCHAR(200) N'$.Optionsid',
   ColumnValue INT N'$.Column'
) AS JsonData
GROUP BY JsonData.ColumnValue, JsonData.Optionsid
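With the two sample rows above, JsonData.ColumnValue is NULL throughout (the JSON has no Column key), so this effectively groups by Optionsid alone and should reproduce the row-based result:

Counted | Optionsid
--------+----------
      2 |         1
      2 |         2
      1 |         3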
Thanks for your time, and for "forcing" me to clarify my question; in doing so I found the solution.
I need to split text elements in an array and combine the elements (array_agg) by index into different rows
E.g., input is
'{cat$ball$x... , dog$bat$y...}'::text[]
I need to split each element by '$' and the desired output is:
{cat,dog} - row 1
{ball,bat} - row 2
{x,y} - row 3
...
Sorry for not being clear the first time; I have edited my question. I tried similar options but was unable to figure out how to do it with multiple text elements separated by the '$' symbol.
Exactly two parts per array element (original question)
Use unnest(), split_part() and array_agg():
SELECT array_agg(split_part(t, '$', 1)) AS col1
, array_agg(split_part(t, '$', 2)) AS col2
FROM unnest('{cat$ball, dog$bat}'::text[]) t;
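Result:

   col1    |    col2
-----------+------------
 {cat,dog} | {ball,bat}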
Related:
Split comma separated column data into additional columns
General solution (updated question)
For any number of arrays with any number of elements containing any number of parts.
Demo for a table tbl:
CREATE TABLE tbl (tbl_id int PRIMARY KEY, arr text[]);
INSERT INTO tbl VALUES
(1, '{cat1$ball1, dog2$bat2}') -- 2 parts per array element, 2 elements
, (2, '{cat$ball$x, dog$bat$y}') -- 3 parts ...
, (3, '{a1$b1$c1$d1, a2$b2$c2$d2, a3$b3$c3$d3}'); -- 4 parts, 3 elements
Query:
SELECT tbl_id, idx, array_agg(elem ORDER BY ord) AS pivoted_array
FROM tbl t
, unnest(t.arr) WITH ORDINALITY a1(string, ord)
, unnest(string_to_array(a1.string, '$')) WITH ORDINALITY a2(elem, idx)
GROUP BY tbl_id, idx
ORDER BY tbl_id, idx;
We are looking at two (nested) LATERAL joins here. LATERAL requires Postgres 9.3. Details:
What is the difference between LATERAL and a subquery in PostgreSQL?
WITH ORDINALITY for the first unnest() is up for debate. A simpler query normally works, too. It's just not guaranteed to work according to the SQL standard:
SELECT tbl_id, idx, array_agg(elem) AS pivoted_array
FROM tbl t
, unnest(t.arr) string
, unnest(string_to_array(string, '$')) WITH ORDINALITY a2(elem, idx)
GROUP BY tbl_id, idx
ORDER BY tbl_id, idx;
Details:
PostgreSQL unnest() with element number
WITH ORDINALITY requires Postgres 9.4 or later. The same back-patched to Postgres 9.3:
SELECT tbl_id, idx, array_agg(arr2[idx]) AS pivoted_array
FROM tbl t
, LATERAL (
SELECT string_to_array(string, '$') AS arr2 -- convert string to array
FROM unnest(t.arr) string -- unnest org. array
) x
, generate_subscripts(arr2, 1) AS idx -- unnest 2nd array with ord. numbers
GROUP BY tbl_id, idx
ORDER BY tbl_id, idx;
Each query returns:

 tbl_id | idx | pivoted_array
--------+-----+---------------
      1 |   1 | {cat1,dog2}
      1 |   2 | {ball1,bat2}
      2 |   1 | {cat,dog}
      2 |   2 | {ball,bat}
      2 |   3 | {x,y}
      3 |   1 | {a1,a2,a3}
      3 |   2 | {b1,b2,b3}
      3 |   3 | {c1,c2,c3}
      3 |   4 | {d1,d2,d3}

(Element order within each array is only guaranteed by the first query, with its ORDER BY ord.)
SQL Fiddle (still stuck on pg 9.3).
The only requirement for these queries is that the number of parts in elements of the same array is constant. We could even make it work for a varying number of parts using crosstab() with two parameters to fill in NULL values for missing parts, but that's beyond the scope of this question:
PostgreSQL Crosstab Query
A bit messy but you could unnest the array, use regex to separate the text and then aggregate back up again:
with a as (
  select unnest('{cat$ball, dog$bat}'::text[]) as some_text
), b as (
  select regexp_matches(a.some_text, '(^[a-z]*)\$([a-z]*$)') as animal_object
  from a
)
select array_agg(animal_object[1]) as animal,
       array_agg(animal_object[2]) as a_object
from b;
If you're processing multiple records at once, you may want to carry something like a row number through the unnest so that you have a group by key for aggregating back into arrays in your final select statement, as sketched below.
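A hypothetical sketch of that idea (the table src with columns id and arr is made up for illustration):

with a as (
  select id, unnest(arr) as some_text  -- keep the row id through the unnest
  from src
), b as (
  select id, regexp_matches(some_text, '(^[a-z]*)\$([a-z]*$)') as animal_object
  from a
)
select id,
       array_agg(animal_object[1]) as animal,
       array_agg(animal_object[2]) as a_object
from b
group by id;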
I am trying to sort the numbers,
MH/122020/101
MH/122020/2
MH/122020/145
MH/122020/12
How can I sort these in an Access query?
I tried format(mid(first(P.PFAccNo),11),"0") but it didn't work.
You need to use expressions in your ORDER BY clause. For test data
ID PFAccNo
-- -------------
1 MH/122020/101
2 MH/122020/2
3 MH/122020/145
4 MH/122020/12
5 MH/122021/1
the query
SELECT PFAccNo, ID
FROM P
ORDER BY
Left(PFAccNo,9),
Val(Mid(PFAccNo,11))
returns
PFAccNo ID
------------- --
MH/122020/2 2
MH/122020/12 4
MH/122020/101 1
MH/122020/145 3
MH/122021/1 5
You have to convert the substring beginning at position 11 to a number; the numbers then sort correctly.
How about this?

SELECT tmpTbl.yourFieldName
FROM tmpTbl
ORDER BY CLng(Mid([tmpTbl].[yourFieldname], InStrRev([tmpTbl].[yourFieldname], "/") + 1));
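Unlike the fixed-position expressions above, InStrRev() finds the last "/" at runtime, so this keeps working even if the prefix length varies.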
Given the following data in my test_table, column DATETIMESTAMP:
XXX123
YYY000
XXX-1234
My statement:

SELECT CInt(Mid(datetimestamp, 4)) AS Ausdr1
FROM test_table
ORDER BY 1;

sorts my data. Please change 4 to 11 and it will work for you.
I am querying my DB, which has only one table:
id | value
----------
1 | 1|2|4
2 | 11|23
3 | 1|4|3|11
4 | 2|4|11
5 | 5|6|11
6 | 12|15|16
7 | 3|1|4
8 | 5|2|1
My query was: SELECT * FROM table_name WHERE value LIKE '%1%'
I want to select only rows containing the value 1, but I get rows with the value 11 too.
How can I express the difference in SQL?
If you have to stick with this broken design, it's probably better to use Postgres' ability to parse a string into an array.
This is more robust than using a like condition:
select *
from the_table
where string_to_array(value,'|') @> array['1']
or maybe a bit easier to read
select *
from the_table
where '1' = any (string_to_array(value,'|'))
Using the contains operator @> you can also search for more than one value at a time:
select *
from the_table
where string_to_array(value,'|') @> array['1','2']
will return all rows where value contains both 1 and 2.
SQLFiddle example: http://sqlfiddle.com/#!15/8793d/2
I strongly recommend that you normalize your schema so that every column stores only atomic values.
Without that, you are forced into nasty tricks, e.g. with arrays:
select * from t
where '1' = any (string_to_array(value, '|'))
or, with pattern matching (the stored string itself serves as the SIMILAR TO pattern: | is alternation, so '1' matches only if one of the alternatives is exactly 1):
select * from t
where '1' similar to value
SQLFiddle
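For reference, a minimal sketch of a normalized design (all names made up):

create table t (
   id int primary key
);

create table t_value (
   t_id  int references t(id),
   value int,
   primary key (t_id, value)
);

-- the search then becomes plain equality, no string parsing:
select t_id from t_value where value = 1;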
I'm quite new to SQL and I'd like to make a SELECT statement that retrieves only the first row of a set, based on a column value. I'll try to make it clearer with an example table.
Here is my table data :
chip_id | sample_id
-------------------
1 | 45
1 | 55
1 | 5986
2 | 453
2 | 12
3 | 4567
3 | 9
I'd like to have a SELECT statement that fetches the first line for each of chip_id = 1, 2, 3
Like this :
chip_id | sample_id
-------------------
1 | 45 or 55 or whatever
2 | 12 or 453 ...
3 | 9 or ...
How can I do this?
Thanks
I'd probably:

set a variable = 0
order your table by chip_id
read the table row by row
if table[row].chip_id > variable, store table[row] in a result array and set variable to that chip_id
loop till done
return your result array

Though depending on your DB, query and versions, you'll probably get unpredictable/unreliable results.
You can get one value using row_number():
select chip_id, sample_id
from (select chip_id, sample_id,
             row_number() over (partition by chip_id order by rand()) as seqnum
      from your_table -- the table name was not given in the question
     ) t
where seqnum = 1
This returns a random value. In SQL, tables are inherently unordered, so there is no concept of "first". You need an auto incrementing id or creation date or some way of defining "first" to get the "first".
If you have such a column, then replace rand() with the column.
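For instance, with a hypothetical created_at column defining "first":

select chip_id, sample_id
from (select chip_id, sample_id,
             row_number() over (partition by chip_id order by created_at) as seqnum
      from your_table
     ) t
where seqnum = 1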
Provided I understood your output, if you are using PostgreSQL 9 you can use this:
SELECT chip_id,
       string_agg(sample_id::text, ' or ') -- cast needed if sample_id is numeric
FROM your_table
GROUP BY chip_id
You need to group your data with a GROUP BY query.
When you group, generally you want the max, the min, or some other value to represent the group; you can do sums, counts, all kinds of group operations.
For your example, you don't seem to want a specific group operation, so the query could be as simple as this:
SELECT chip_id, MAX(sample_id)
FROM your_table
GROUP BY chip_id
This way you retrieve the maximum sample_id for each chip_id.
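For the sample data this returns:

 chip_id | max
---------+------
       1 | 5986
       2 |  453
       3 | 4567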