Postgres JSON selecting a row by list - sql

I have a table that has a JSON list as one of its values. The column name is list_order and the value would be something like: [1,2,3].
I am having trouble doing a WHERE comparison to select by list_order. In pseudo-SQL, what I want is: SELECT * FROM table_name WHERE list_order = [1,2,3];
The closest example I found was this: How do I query using fields inside the new PostgreSQL JSON datatype?. However, this grabs the value of a key in the JSON where the JSON is a dictionary and not a list. I've tried modifying it to suit my need but it did not work.
Any suggestions? Is that even possible? Why is it not documented? Thanks!

I found the answer. I need to compare it as text:
SELECT * FROM table_name WHERE list_order::text = '[1,2,3]';

Related

How to use json array in WHERE IN clause in Postgres

I have a Postgres query like this
SELECT * FROM my_table WHERE status IN (2,1);
This is part of a bigger query, but I am facing an issue with the WHERE IN part. I am using this query inside a function whose input parameters are in JSON format. The status values arrive as a JSON array, like status=[2,1]. I need to use this array in the WHERE clause of the query and am not sure how to do that. Currently I am using
SELECT * FROM my_table WHERE status IN (array([2,1]));
But this gives me an error. The status column is of the smallint data type. I know this is simple, but I am very new to Postgres and could not figure out any way to use a JSON array in a WHERE IN clause. Any help will be appreciated.
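One possible approach, sketched below, assuming the parameter arrives as a jsonb array such as '[2,1]': unnest it with jsonb_array_elements_text() and cast the elements to smallint to match the column:
SELECT *
FROM my_table
WHERE status IN (
    -- jsonb_array_elements_text() returns one text row per array element;
    -- '[2,1]'::jsonb stands in for the actual function parameter
    SELECT value::smallint
    FROM jsonb_array_elements_text('[2,1]'::jsonb)
);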

Extract key value pair from json column in redshift

I have a table mytable with a column that stores JSON strings containing multiple key-value pairs. I want to extract only the value corresponding to one particular key.
The column that stores these strings is of the varchar data type, and rows are inserted as:
insert into mytable(empid, json_column) values (1, '{"FIRST_NAME":"TOM", "LAST_NAME":"JENKINS", "DATE_OF_JOINING":"2021-06-10", "SALARY":"1000"}');
As you can see, json_column is created by inserting only a string. Now, I want to do something like:
select json_column.FIRST_NAME from mytable
I just want to extract the value corresponding to key FIRST_NAME.
My actual table is far more complex than this example, and I cannot convert these JSON keys into separate columns, but this example clearly illustrates my issue.
This needs to be done on Redshift, so please help me out with any suggestions.
Redshift's json_extract_path_text function solves this easily, as follows:
select json_extract_path_text(json_column, 'FIRST_NAME') from mytable;
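If the value sits deeper in the document, json_extract_path_text also accepts a path of several keys. A sketch, where ADDRESS and CITY are hypothetical nested keys for illustration:
-- ADDRESS and CITY are hypothetical nested keys, not from the sample row
select json_extract_path_text(json_column, 'ADDRESS', 'CITY') from mytable;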

T-SQL: Exclude Columns from a SELECT statement based on string

My overall goal is to create new tables by selecting columns in existing tables with certain patterns/tags in their column names. This is in SQL Server.
For example, I have a common need to get all contact information out of a company table, and into its own contact table.
I haven't been able to find a programmatic way in SQL to exclude columns from a SELECT statement based on a string pattern. Options like the COL_NAME function require an ID argument, which rules them out for me.
Wishing there was something built in that could work like the following:
SELECT *
FROM TABLE
WHERE COL_NAME() LIKE 'FLAG%'
Any ideas? Open to anything! Thanks!!
One trick is to first use the following query to get the column names you want:
select * from information_schema.columns
where table_name='tbl' and column_name like 'FLAG%'
Then concatenate them into a comma-delimited string and finally build a dynamic SQL query that uses that string as the column list, as sketched below.
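Putting the steps together, a minimal sketch, assuming SQL Server 2017+ (for STRING_AGG); the table name tbl and the FLAG% pattern are placeholders:
-- Build a column list from the matching names, then run it as dynamic SQL.
DECLARE @cols nvarchar(max), @sql nvarchar(max);

SELECT @cols = STRING_AGG(CAST(QUOTENAME(column_name) AS nvarchar(max)), ', ')
FROM information_schema.columns
WHERE table_name = 'tbl' AND column_name LIKE 'FLAG%';

SET @sql = N'SELECT ' + @cols + N' FROM dbo.tbl;';
EXEC sp_executesql @sql;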

check if a jsonb field contains an array

I have a jsonb field in a PostgreSQL table which is supposed to contain dictionary-like data, i.e. {...}, but a few of its entries got an array due to source data issues.
I want to weed out those entries. One way is to run the following query:
select json_field from data_table where cast(json_field as text) like '[%]'
But this requires converting each jsonb value to text. With data_table holding on the order of 200 million rows, this looks like overkill.
I investigated pg_typeof, but it returns jsonb, which doesn't help differentiate between a dictionary and an array.
Is there a more efficient way to achieve the above?
How about using the jsonb_typeof function? (The column is jsonb, so jsonb_typeof is the right variant; json_typeof is for the json type.)
select json_field from data_table where jsonb_typeof(json_field) = 'array'
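If this check has to run repeatedly against 200 million rows, one option (a sketch, assuming the query is recurring) is an expression index so later lookups avoid a full scan:
-- jsonb_typeof() is immutable, so it can back an expression index
create index idx_json_field_type on data_table ((jsonb_typeof(json_field)));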

Select all existing json fields from a postgres table

In my table mytable I have a json field called data, into which I inserted JSON with a lot of keys and values.
I know that it's possible to select individual fields like so:
SELECT data->'mykey' as mykey from mytable
But how can I get an overview of all the JSON keys at a certain depth? I would have expected something like
SELECT data->* from mytable
but that doesn't work. Is there something similar?
You can use the json_object_keys() function to get all the top-level keys of a json value:
SELECT keys.*
FROM mytable, json_object_keys(mytable.data) AS keys (mykey);
If you want to search at a deeper level, then first extract that deeper level from the json value using the #> operator:
SELECT keys.*
FROM mytable, json_object_keys(mytable.data #> '{level1, level2}') AS keys (mykey);
Note that the function returns a set of text values, so you should invoke it as a row source.
If you are using the jsonb data type, then use the jsonb_object_keys() function.
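As a usage example, here is a sketch of listing each distinct top-level key once across the whole table (the alias k is arbitrary):
-- The function alias doubles as the column name for a scalar-returning function
SELECT DISTINCT k
FROM mytable, json_object_keys(mytable.data) AS k;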