I have a table, for example writing, in psql. This table has a column json (text type). It contains text like this:
writing:[{"variableName":variableValue ...}]
The variable values are of different types, including text, bigint and date.
I want to get all rows from writing where variableName has the value 2.
I'm using this select:
select * from writing where json::json->>'variableName' = '2' limit 5
This select returns 0 rows, but there is a lot of data in this table that should satisfy this condition. Any idea what is wrong, or do you have a better statement?
I'm using limit 5 because I need just 5 rows.
You'll have to prepend a { and append a } to make it the JSON you intend. As it is, it will become a single JSON string.
Then you'll have to access the attribute (JSON arrays are 0-indexed, so ->0 is the first element of writing) as
('{' || json || '}')::json->'writing'->0->>'variableName'
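A minimal sketch of the full query, assuming the table and column names from the question and that the value of interest sits in the first element of the writing array:
select *
from writing
where ('{' || json || '}')::json->'writing'->0->>'variableName' = '2'
limit 5;
Since ->> returns text, comparing against the string '2' matches both the JSON number 2 and the JSON string "2".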
I'm using SQLite to store JSON data that I have no control over. I have a logs table that looks like this:
id                       | value
s8i13s85e8f34zm8vikkcv5n | {"key":["a","b"]}
m2abxfn2n9pkyc9kjmko5462 | {"key": "sometext"}
Then I use the following query to get the rows where value.key contains a:
SELECT * FROM logs WHERE EXISTS (SELECT * FROM json_each(json_extract(logs.value,'$.key')) WHERE json_each.value = 'a')
The query works fine if key is an array or if it doesn't exist, but it fails if it is a string (like in the second row of the table).
The error I get is:
SQL error or missing database (malformed JSON)
And that is because json_each throws if the parameter is a string.
Because of the requirements I can't control the user data or the queries.
Ideally I would like to figure out a query that either doesn't fail or that detects that the value is a string instead of an array and uses LIKE to see if the string contains 'a'.
Any help would be appreciated. Happy holidays :)
Use a CASE expression in the WHERE clause which checks if the value is an array or not:
SELECT *
FROM logs
WHERE CASE
        WHEN value LIKE '{"key":[%]}' THEN
          EXISTS (
            SELECT *
            FROM json_each(json_extract(logs.value, '$.key'))
            WHERE json_each.value = 'a'
          )
        ELSE json_extract(value, '$.key') = 'a'
      END;
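An alternative sketch that branches on the actual JSON type with SQLite's json_type() instead of pattern-matching the raw text (assuming the same logs table; the ELSE branch drops rows where the key is missing or of another type):
SELECT *
FROM logs
WHERE CASE json_type(value, '$.key')
        WHEN 'array' THEN EXISTS (
          SELECT 1
          FROM json_each(value, '$.key')
          WHERE json_each.value = 'a'
        )
        WHEN 'text' THEN json_extract(value, '$.key') LIKE '%a%'
        ELSE 0
      END;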
I have a table with a JSON column and want to select rows where the JSON key 'k' has a given value. The JSON may consist of several [k, v] pairs.
[
{"k":"esr:code","v":"800539"},
{"k":"lit","v":"yes"},
{"k":"name","v":"5 ΠΊΠΌ"},
{"k":"railway","v":"halt"},
{"k":"uic_ref","v":"2040757"}
]
I tried the following query, but it's wrong.
SELECT *
FROM public.node
where ((node.tags)::json->>'k' like 'name')
How can I fix it, if that's possible?
Here node is the table name and tags is the json column.
You can use the JSONB contains operator @>
SELECT *
FROM public.node
where node.tags @> '[{"k": "name"}]';
This will do an exact match against name. Your usage of like might indicate you are looking for a partial match; however, as your like condition doesn't use a wildcard, it's the same as =.
This assumes that tags is defined as jsonb (which it should be). If it's not you need to cast it: node.tags::jsonb
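If you also need to match on the value, or do a partial match on it, a sketch along these lines might help (assuming tags is jsonb and holds an array of {"k": ..., "v": ...} objects as in the question):
-- exact match on both key and value
SELECT *
FROM public.node
WHERE node.tags @> '[{"k": "name", "v": "5 км"}]';
-- partial match on the value of a given key
SELECT n.*
FROM public.node n
WHERE EXISTS (
  SELECT 1
  FROM jsonb_array_elements(n.tags) AS elem
  WHERE elem->>'k' = 'name'
    AND elem->>'v' LIKE '5%'
);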
Here is my problem statement:
I have a single-column table with data like this:
ROW-1>> 7302-2210177000-XXXX-XXXXXX-XXX-XXXXXXXXXX-XXXXXX-XXXXXX-U-XXXXXXXXX-XXXXXX
ROW-2>> 0311-1130101-XXXX-000000-XXX-XXXXXXXXXX-XXXXXX-XXXXXX-X-XXXXXXXXX-WIPXXX
Here I want to split these values on '-' and load them into a new table. There are 11 segments in this string separated by '-', therefore 11 columns. The problem is:
A. The lengths of these values vary, but I have to keep either the standard length or whatever length the value actually has,
e.g. the first segment 7302 should have four characters, but if the value is shorter than that, e.g. 73, then it should populate 73 as it is.
Therefore, I have to split the values as well as maintain their integrity. The code which I am writing is:
select
SUBSTR(PROFILE_ID,1,(case when length(instr(PROFILE_ID,'-')<>4) THEN (instr(PROFILE_ID,'-') else SUBSTR(PROFILE_ID,1,4) end)
)AS [RQUIRED_COLUMN_NAME]
from [TABLE_NAME];
I'm getting a right parenthesis error.
Please help.
I used the REGEXP_SUBSTR SQL function to solve the above issue. Here is an example:
select regexp_substr('7302-2210177000-XXXX-XXXXXX-XXX-XXXXXXXXXX-XXXXXX-XXXXXX-U-XXXXXXXXX-XXXXXX', '[^-]+', 1, 1);
Output is: 7302, which is the 1st segment of the string.
Similarly, the second segment, which is separated by "-" in the string, can be obtained by just replacing the final 1 with 2 in the above query:
Example: select regexp_substr('7302-2210177000-XXXX-XXXXXX-XXX-XXXXXXXXXX-XXXXXX-XXXXXX-U-XXXXXXXXX-XXXXXX', '[^-]+', 1, 2);
Output: 2210177000, which is the 2nd segment of the string.
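Extending the same idea, a sketch of how all 11 segments could be pulled out of the PROFILE_ID column from the question in one pass (assuming the dialect's REGEXP_SUBSTR signature of (source, pattern, position, occurrence)):
select
  regexp_substr(PROFILE_ID, '[^-]+', 1, 1)  as seg01,
  regexp_substr(PROFILE_ID, '[^-]+', 1, 2)  as seg02,
  regexp_substr(PROFILE_ID, '[^-]+', 1, 3)  as seg03,
  regexp_substr(PROFILE_ID, '[^-]+', 1, 4)  as seg04,
  regexp_substr(PROFILE_ID, '[^-]+', 1, 5)  as seg05,
  regexp_substr(PROFILE_ID, '[^-]+', 1, 6)  as seg06,
  regexp_substr(PROFILE_ID, '[^-]+', 1, 7)  as seg07,
  regexp_substr(PROFILE_ID, '[^-]+', 1, 8)  as seg08,
  regexp_substr(PROFILE_ID, '[^-]+', 1, 9)  as seg09,
  regexp_substr(PROFILE_ID, '[^-]+', 1, 10) as seg10,
  regexp_substr(PROFILE_ID, '[^-]+', 1, 11) as seg11
from [TABLE_NAME];
Note that the pattern '[^-]+' skips empty segments, so if two hyphens can appear back to back, the segments after the gap will shift left.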
I am working on data in PostgreSQL, as in the following mytable with the fields id (type int) and val (type json):
id | val
1  | "null"
2  | "0"
3  | "2"
The values in the json column val are simple JSON values, i.e. just strings with surrounding quotes and no key.
I have looked at the SO post How to convert postgres json to integer and attempted something like the solution presented there
SELECT (mytable.val->>'key')::int FROM mytable;
but in my case, I do not have a key to address the field and leaving it empty does not work:
SELECT (mytable.val->>'')::int as val_int FROM mytable;
This returns NULL for all rows.
The best I have come up with is the following (casting to varchar first, trimming the quotes, filtering out the string "null" and then casting to int):
SELECT id, nullif(trim('"' from mytable.val::varchar), 'null')::int as val_int FROM mytable;
which works, but surely cannot be the best way to do it, right?
Found the way to do it:
You can access the content via the keypath (see e.g. this PostgreSQL JSON cheatsheet):
Using the #> operator, you can access the json fields through a keypath. Specifying an empty keypath like this {} allows you to get your content without a key.
Using the double-angle-bracket form #>> of the accessor will return the content without the quotes, so there is no need for the trim() function.
Overall, the statement
select id
, nullif(val#>>'{}', 'null')::int as val_int
from mytable
;
will return the contents of the former json column as int, respectively NULL (in PostgreSQL >= 9.4):
id | val_int
1  | NULL
2  | 0
3  | 2
Note: As pointed out by @Mike in a comment, if the column format is jsonb, you can also use val->>0 to dereference scalars. However, if the format is json, the ->> operator will yield null as the result.
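A minimal sketch contrasting the two forms on the question's mytable (the ::jsonb cast is only needed because val is declared as json; with a jsonb column it can be dropped):
select id
     , nullif(val #>> '{}', 'null')::int     as via_keypath -- works for json and jsonb
     , nullif(val::jsonb ->> 0, 'null')::int as via_index   -- scalar dereference needs jsonb
from mytable;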
I have a table 'article' with a column 'content'. I want to query PostgreSQL in order to search for a string contained in the variable 'temp'. This query works fine:
pool.query("select * from article where upper(content) like upper('%some_value%')");
But when I use the placeholder $1 and [temp] in place of some_value, it does not work:
pool.query("select * from article where upper(content) LIKE upper('%$1%')",[temp] );
Note: here $1 is a placeholder and should be replaced by the value in [temp], but it treats '%$1%' as a literal string, I guess. Without the quotes ' ', the LIKE operator doesn't work. I have also tried the query
pool.query("select * from article where upper(content) LIKE upper(concat('%',$1,'%'))",[temp] );
to ensure $1 is not treated as a string literal, but it gives the error:
error: could not determine data type of parameter $1
pool.query(
"select * from article where upper(content) LIKE upper('%' || $1 || '%')",
[temp]
).then( res => {console.log(res)}, err => {console.error(err)})
This works for me. I just looked at the Postgres docs to try to understand what concat was doing to the parameter notation. I can't say that I understand the difference between using the || operator and the concat string function at this time.
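For what it's worth, the "could not determine data type of parameter $1" error from the concat() variant can usually be avoided by giving the parameter an explicit cast, since concat() accepts arguments of any type and so gives the parser nothing to infer the type from. A sketch of the SQL, assuming the same article table and temp value:
select * from article where upper(content) like upper(concat('%', $1::text, '%'));
With the || form, the text type is presumably inferred from the surrounding '%' literals, which would explain why that version works without a cast.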
The easiest way I found to do this is like the following:
// You can remove [0] from character[0] if you want the complete value of character.
database.query(`
SELECT * FROM users
WHERE LOWER(users.name) LIKE LOWER($1)
ORDER BY users.id ASC`,
["%" + character[0] + "%"]
);
// A template-literal alternative to the value in the last line of the call: [`%${character}%`]
There are several things going on here, so let me break it down line by line.
SELECT * FROM users
This selects all the columns associated with the users table.
WHERE LOWER(users.name) LIKE LOWER($1)
This filters the results of the first line down to rows where the name column (lowercased) of the users table matches the parameter $1.
ORDER BY users.id ASC
This is optional, but I like to include it because I want the data returned to me to be in ascending order (that is from 0 to infinity, or starting low and going high) based on the users.id or the id column of the users table. A popular alternative for client-side data presentation is users.created_at DESC which shows the latest user (or more than likely an article/post/comment) by its creation date in reverse order so you get the newest content at the top of the array to loop through and display on the client-side.
["%" + character + "%"]
This part is the second argument in the .query method call on the database object (or client if you kept that name; you can name it what you want, and database to me makes for a more sensible read than "client", but that is just my personal opinion, and it's highly possible that "client" may be the more technically correct term to use).
The second argument needs to be an array of values. It takes the place of the parameters inserted in the query string; for example, $1 or ? are parameter placeholders which are filled in with values from the 2nd argument's array. In this case, I used JavaScript's built-in string concatenation to provide an "includes"-like pattern, or in plain broken English, "find me rows whose column contains this value", where name (lowercased) is the column and character is the parameter variable value. I am pulling the parameter value for the character variable from req.params (the URL, so http://localhost:3000/users/startsWith/t), so combining that with % on both ends of the parameter, it returns all the values that contain the letter t, since it is the first (and only) character here in the URL.
I know this is a VERY late response, but I wanted to respond with a more thorough answer in case anyone else needed it broken down further.
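As a side note, since this is PostgreSQL under node-postgres, ILIKE can do the case-insensitive match directly, so the LOWER() calls can be dropped; a sketch against the same users table:
SELECT * FROM users
WHERE users.name ILIKE $1
ORDER BY users.id ASC;
The parameter value is supplied exactly as before, with % wrapped around the search character.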
In my case:
My variable was $1, instead of ?1 ...
I was customizing my query with @Query.