I want to get a tsrange column returned as JSON, but I do not understand how to get it to work in my query:
reserved is of type TSRANGE.
c.id and a.reservationid are of type TEXT.
-- query one
SELECT DISTINCT c.*, to_json(a.reserved) FROM complete_reservations c
JOIN availability a ON (c.id = a.reservationid)
throws
ERROR: could not identify an equality operator for type json
LINE 1: SELECT DISTINCT c.*, to_json(a.reserved) FROM complete_reser...
^
SQL state: 42883
Character: 22
It works if I try it like this:
-- query two
SELECT to_json('[2011-01-01,2011-03-01)'::tsrange);
Result:
"[\"2011-01-01 00:00:00\",\"2011-03-01 00:00:00\")"
and I do not understand the difference between the two scenarios.
How do I get query one to behave like query two?
As pointed out in this comment by Edouard, there seems to be no JSON representation of a tsrange. Therefore I framed my question wrong.
In my concrete case it was sufficient to turn the tsrange into an array of the upper() and lower() values of the TSRANGE and cast those values as strings. This way I can use the output as is and let my downstream tools handle it as JSON.
SELECT DISTINCT c.*,
       ARRAY[to_char(lower(a.reserved),'YYYY-MM-DD HH:MI:SS'),
             to_char(upper(a.reserved),'YYYY-MM-DD HH:MI:SS')] AS reserved
FROM complete_reservations_with_itemidArray c
JOIN availability a ON (c.id = a.reservationid)
which returns a value like this in the reserved column: {"2023-02-04 04:57:00","2023-02-05 04:57:00"}, which can be parsed as JSON if needed.
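If real JSON output is preferred over the array literal, something like jsonb_build_array() should also work (untested sketch, assuming Postgres 9.5 or later; table and column names as in query one):
SELECT DISTINCT c.*,
       -- jsonb (unlike json) has an equality operator, so DISTINCT still works
       jsonb_build_array(lower(a.reserved), upper(a.reserved)) AS reserved
FROM complete_reservations c
JOIN availability a ON (c.id = a.reservationid)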
I post this for reference. I am not sure if it exactly answers my question as it was framed.
So I'm attempting to do a join that requires both CONCAT and SUBSTRING.
Table one has a column with a time and location, e.g. '02:00 IND'.
Table two has a column with a date/time, e.g. '2020-10-10 02:00:00.000000', and another column with a location, e.g. 'IND'.
This is the statement that I'm trying, but it isn't working:
SELECT *
FROM FIRST_TABLE
INNER JOIN SECOND_TABLE on FIRST_TABLE.TIME_LOCATION =
CONCAT(SUBSTRING(SECOND_TABLE.TIME,12,5) , SECOND_TABLE.LOCATION);
I am receiving the below error:
[SQL0171] Argument 1 of function SUBSTRING not valid.
Cause: The data type, length, or value of argument 1 of function SUBSTRING specified is not valid.
Recovery: Refer to the DB2 for IBM i SQL Reference topic collection in the Database category in the IBM i Information Center for more information on scalar functions. Correct the arguments specified for the function. Try the request again.
There will be no space when you concatenate the substrings.
It will look like this:
02:00IND
I am guessing it is just returning an empty result, since you have not mentioned it returning an error :)
I got it to work, if anyone ever has a similar issue. I am a beginner in SQL, so I don't know if this will be helpful or not; I'm sure there is a better way of doing it.
Anyway, my issue was that the data types were incompatible. I cast the timestamp as TIME, which left me with an 'HH.MM.SS' format. After that I had to cast it to an NVARCHAR to make a substring that excluded the seconds. And then I had to replace the '.' with a ':' to make the values match. It was a lot of work, but I figured it out!
SELECT *
FROM FIRST_TABLE
INNER JOIN SECOND_TABLE on FIRST_TABLE.TIME_LOCATION =
CONCAT(CONCAT(REPLACE(SUBSTRING(CAST(TIME(SECOND_TABLE.TIME) AS NVARCHAR(8)), 1, 5),
'.', ':'), ' '), DSP134.LSDTID);
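For reference, VARCHAR_FORMAT() might shorten this, if it is available on your DB2 for i release (untested sketch; it formats the timestamp directly, so the CAST/SUBSTRING/REPLACE chain is not needed):
SELECT *
FROM FIRST_TABLE
INNER JOIN SECOND_TABLE ON FIRST_TABLE.TIME_LOCATION =
  -- 'HH24:MI' yields e.g. '02:00'; append a space and the location column
  CONCAT(CONCAT(VARCHAR_FORMAT(SECOND_TABLE.TIME, 'HH24:MI'), ' '), SECOND_TABLE.LOCATION);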
How do I select the column that is literally named "count(division)" instead of computing an actual count?
select * from num_taught;
gets me this
select count(division) from num_taught;
gets me this, but I actually want the third column "count(division)" from the previous image
I want to know this because I'm doing this right now:
sql> select * from num_taught as a, num_taught as b
...> where a.count(division) = b.count(division);
Error: near "(": syntax error
but as you can see, there's a syntax error, and I think it's because the code is not referencing the "count(division)" columns but actually computing the count instead.
My end goal is to output the "Titles" that have the same "Division" and the same "count(division)".
So for example, the end table would have the rows "Chief Accountant", "Programmer Trainee", "Scrivener", "Technician", and "Wizard", since these are the rows that match on both division and count(division).
Thanks!
What does DESC num_taught return? I am curious how the third column is populated - is it some kind of pseudo-column? You may want to try wrapping the column name with [], see: How to deal with SQL column names that look like SQL keywords?
i.e. try:
select [count(division)] from num_taught;
You need to escape your column name using double quotes (given that it's SQLite, as you mentioned in the comments).
select "count(division)" from num_taught;
or:
select * from num_taught as a, num_taught as b
where a."count(division)" = b."count(division)";
If you don't, you are using the COUNT function provided by your database system.
It's very unusual to name a column like this; it might be either a trap by your tutor or an error made while initializing the table.
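Building on that quoting, the end goal stated in the question (titles that match on both "Division" and "count(division)") could look like this sketch (the Title and Division column names are assumed from the screenshots):
select a."Title"
from num_taught as a
join num_taught as b
  on a."Division" = b."Division"
 and a."count(division)" = b."count(division)"
 and a."Title" <> b."Title";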
I think you just want a count(distinct):
select count(distinct division)
from num_taught;
I want to get the average of the data in one of the tables, but I get an "Arithmetic overflow error converting expression to data type float." exception. My table looks like this:
I have to get the average of the floating-point values in the table.
I want to exclude '-1.79769313486232E+308' from the table. How do I do that?
Query that I have,
SELECT PS.Name, Z.[Round]
,AVG(Z.[EstimatedValue])
,AVG(Z.[InformationGain])
FROM [dbo].[IntermediateScores] Z
INNER JOIN [dbo].[PScale] PS
ON Z.[ScaleId] = PS.Id
GROUP BY PS.Name, Z.[Round]
ORDER BY PS.Name,Z.[Round]
Assuming that you are running SQL Server (which the syntax of your query and the error message that you showed tend to indicate), you could use TRY_CAST() to handle values that do not fit in the FLOAT datatype. When the conversion fails, TRY_CAST() returns NULL, which aggregate function AVG() ignores.
SELECT
PS.Name,
Z.[Round],
AVG(TRY_CAST(Z.[EstimatedValue] AS FLOAT)),
AVG(TRY_CAST(Z.[InformationGain] AS FLOAT))
FROM [dbo].[IntermediateScores] Z
INNER JOIN [dbo].[PScale] PS ON Z.[ScaleId] = PS.Id
GROUP BY PS.Name, Z.[Round]
ORDER BY PS.Name,Z.[Round]
In general, you want to avoid relying on implicit conversion; it seems like your data is not stored in a numeric datatype, which is the root cause of your problem. Explicit conversion is a better practice, since it is easier to debug when things go wrong.
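As for excluding the '-1.79769313486232E+308' sentinel mentioned in the question: one option is NULLIF(), since AVG() ignores NULLs. A sketch on top of the query above (note that exact equality on floats can be unreliable, so a range test such as < -1E308 may be safer):
SELECT
    PS.Name,
    Z.[Round],
    -- NULLIF maps the sentinel value to NULL, which AVG() then skips
    AVG(NULLIF(TRY_CAST(Z.[EstimatedValue] AS FLOAT), -1.79769313486232E+308)),
    AVG(NULLIF(TRY_CAST(Z.[InformationGain] AS FLOAT), -1.79769313486232E+308))
FROM [dbo].[IntermediateScores] Z
INNER JOIN [dbo].[PScale] PS ON Z.[ScaleId] = PS.Id
GROUP BY PS.Name, Z.[Round]
ORDER BY PS.Name, Z.[Round]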
I have some data in a postgres table that is a string representation of an array of json data, like this:
[
{"UsageInfo"=>"P-1008366", "Role"=>"Abstract", "RetailPrice"=>2, "EffectivePrice"=>0},
{"Role"=>"Text", "ProjectCode"=>"", "PublicationCode"=>"", "RetailPrice"=>2},
{"Role"=>"Abstract", "RetailPrice"=>2, "EffectivePrice"=>0, "ParentItemId"=>"396487"}
]
This is the data in one cell, from a single column of similar data in my database.
The datatype of this column in the db is varchar(max).
My goal is to find the average RetailPrice of EVERY json item with "Role"=>"Abstract", including all of the json elements in the array, and all of the rows in the database.
Something like:
SELECT avg(json_extract_path_text(json_item, 'RetailPrice'))
FROM (
SELECT cast(json_items to varchar[]) as json_item
FROM my_table
WHERE json_extract_path_text(json_item, 'Role') like 'Abstract'
)
Now, obviously this particular query wouldn't work for a few reasons. Postgres doesn't let you directly convert a varchar to a varchar[]. Even after I had an array, this query would do nothing to iterate through the array. There are probably other issues with it too, but I hope it helps to clarify what it is I want to get.
Any advice on how to get the average retail price from all of these arrays of json data in the database?
It does not seem like Redshift supports the json data type per se; at least, I found nothing in the online manual.
But I found a few JSON functions in the manual, which should be instrumental:
JSON_ARRAY_LENGTH
JSON_EXTRACT_ARRAY_ELEMENT_TEXT
JSON_EXTRACT_PATH_TEXT
Since generate_series() is not supported, we have to substitute for that ...
SELECT tbl_id
     , round(avg((json_extract_path_text(elem, 'RetailPrice'))::numeric), 2) AS avg_retail_price
FROM  (
   SELECT *, json_extract_array_element_text(json_items, pos) AS elem
   FROM  (VALUES (0),(1),(2),(3),(4),(5)) a(pos)
   CROSS JOIN tbl
   ) sub
WHERE json_extract_path_text(elem, 'Role') = 'Abstract'
GROUP BY 1;
I substituted a poor man's solution: a dummy table counting from 0 to n (the VALUES expression). Make sure you count up to the maximum possible number of elements in your array. If you need this on a regular basis, create an actual numbers table.
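Such a numbers table is quick to set up once (sketch; table and column names are placeholders):
CREATE TABLE nums (pos int);
INSERT INTO nums (pos) VALUES (0),(1),(2),(3),(4),(5);  -- extend to your maximum array length
-- then replace the VALUES expression above with: FROM nums CROSS JOIN tbl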
Modern Postgres has much better options, like json_array_elements() to unnest a json array. Compare to your sibling question for Postgres:
Can get an average of values in a json array using postgres?
I tested in Postgres with the related operator ->>, where it works:
SQL Fiddle.
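For completeness, in Postgres 9.3+ the whole query could be written with json_array_elements() (untested sketch, assuming the column actually holds valid JSON rather than the =>-style text shown above):
SELECT tbl_id
     , round(avg((elem->>'RetailPrice')::numeric), 2) AS avg_retail_price
FROM   tbl
     , json_array_elements(json_items::json) elem  -- implicit LATERAL join
WHERE  elem->>'Role' = 'Abstract'
GROUP  BY 1;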
Using Postgres 9.0, I need a way to test if a value exists in a given array. So far I came up with something like this:
select '{1,2,3}'::int[] #> (ARRAY[]::int[] || value_variable::int)
But I keep thinking there should be a simpler way to do this; I just can't see it. This seems better:
select '{1,2,3}'::int[] #> ARRAY[value_variable::int]
I believe it will suffice. But if you have other ways to do it, please share!
Simpler with the ANY construct:
SELECT value_variable = ANY ('{1,2,3}'::int[])
The right operand of ANY (between parentheses) can either be a set (result of a subquery, for instance) or an array. There are several ways to use it:
SQLAlchemy: how to filter on PgArray column types?
IN vs ANY operator in PostgreSQL
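A quick illustration of both forms:
SELECT 3 = ANY ('{1,2,3}'::int[]);             -- array literal: true
SELECT 3 = ANY (SELECT generate_series(1, 3)); -- set from a subquery: true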
Important difference: Array operators (<@, @>, && et al.) expect array types as operands and support GIN or GiST indices in the standard distribution of PostgreSQL, while the ANY construct expects an element type as left operand and can be supported with a plain B-tree index (with the indexed expression to the left of the operator, not the other way round like it seems to be in your example). Example:
Index for finding an element in a JSON array
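As a sketch (table and column names are placeholders):
-- a GIN index supports the array operators (@>, <@, && ...)
CREATE INDEX tbl_arr_gin_idx ON tbl USING gin (arr_col);
-- a plain B-tree index supports: WHERE elem_col = ANY ('{1,2,3}')
CREATE INDEX tbl_elem_idx ON tbl (elem_col);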
None of this works for NULL elements. To test for NULL:
Check if NULL exists in Postgres array
Watch out for the trap I got into: when checking whether a certain value is not present in an array, you shouldn't use
SELECT value_variable != ANY('{1,2,3}'::int[])
but
SELECT value_variable != ALL('{1,2,3}'::int[])
instead. != ANY is true as soon as the value differs from at least one element (which is almost always the case), while != ALL is only true if it differs from every element.
"But if you have other ways to do it, please share!"
You can compare two arrays. If any of the values in the left array overlap the values in the right array, then it returns true. It's kind of hackish, but it works.
SELECT '{1}' && '{1,2,3}'::int[]; -- true
SELECT '{1,4}' && '{1,2,3}'::int[]; -- true
SELECT '{4}' && '{1,2,3}'::int[]; -- false
In the first and second queries, value 1 is in the right array.
Notice that the second query is true, even though value 4 is not contained in the right array.
For the third query, no values in the left array (i.e., 4) are in the right array, so it returns false.
unnest can be used as well.
It expands an array to a set of rows, and then checking whether a value exists or not is as simple as using IN or NOT IN.
e.g.
id => uuid
exception_list_ids => uuid[]
select * from table where id NOT IN (select unnest(exception_list_ids) from table2)
This one works fine for me; maybe it will be useful for someone:
select * from your_table where array_column::text ilike ANY (ARRAY['%text_to_search%'::text]);
"Any" works well. Just make sure that the any keyword is on the right side of the equal to sign i.e. is present after the equal to sign.
Below statement will throw error: ERROR: syntax error at or near "any"
select 1 where any('{hello}'::text[]) = 'hello';
whereas the example below works fine:
select 1 where 'hello' = any('{hello}'::text[]);
When looking for the existence of an element in an array, proper casting is required to get past the Postgres SQL parser. Here is one example query using the array contains operator in the join clause:
For simplicity I only list the relevant part:
table1 other_name text[]; -- is an array of text
The join part of the SQL is shown below:
from table1 t1 join table2 t2 on t1.other_name::text[] #> ARRAY[t2.panel::text]
The following also works
on t2.panel = ANY(t1.other_name)
I am just guessing that the extra casting is required because the parser does not fetch the table definition to figure out the exact type of the column. Others, please comment on this.