SQL - ORDER BY Timestamp Array Data does not work

I have a table with the following schema:
fields.TIME_256Hz     TIMESTAMP   REPEATED
fields.SAC_ACCELX     FLOAT       REPEATED
fields.SAC_ACCELY     FLOAT       REPEATED
fields.SAC_ACCELZ     FLOAT       REPEATED
fields.SAC_GYROX      FLOAT       REPEATED
fields.SAC_GYROY      FLOAT       REPEATED
fields.SAC_GYROZ      FLOAT       REPEATED
fields.SAC_PRESSURE   FLOAT       REPEATED
fields.TARGET         STRING      REPEATED
I tried to sort the data by timestamp using this query:
select * from liftpdm.liftpdm_2.acc8
order by fields.TIME_256Hz desc
but I keep getting the following error:
ORDER BY does not support expressions of type ARRAY<TIMESTAMP> at [2:10]
I tried changing the data type to DATETIME and STRING, but I get the same error.
How can I sort this data?
Thanks

Reproducible error:
WITH data AS (
SELECT [1,2] arr UNION ALL
SELECT [2,1] UNION ALL
SELECT [4,5,6]
)
SELECT * FROM data
ORDER BY arr
# ORDER BY does not support expressions of type ARRAY<INT64> at [8:10]
First you need to decide what you really want to ORDER BY, as ordering by an array doesn't make much sense.
Some alternatives:
SELECT * FROM data
ORDER BY arr[SAFE_OFFSET(0)]
SELECT * FROM data
ORDER BY arr[SAFE_OFFSET(1)]
SELECT * FROM data
ORDER BY ARRAY_LENGTH(arr) DESC
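Applied to the table in the question, a sketch along the same lines (assuming you want to order rows by the first timestamp in each array) could be:
SELECT *
FROM liftpdm.liftpdm_2.acc8
-- sort by the first element of the repeated TIME_256Hz field; rows with an empty array sort as NULL
ORDER BY fields.TIME_256Hz[SAFE_OFFSET(0)] DESC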

Related

How is a binary/bytes column sorted in SQL?

The following query sorts a binary column in BigQuery:
with tbl as (
select B'123' as col union all select B'234'
) select * from tbl order by col;
----------------------------------
Row   col    f0_
1     MTIz   false
2     MjM0   false
Is there a convention for how a Binary column is sorted? The above example is tested against BigQuery.
The BINARY type is similar to CHAR, except that it stores binary strings rather than nonbinary strings. That is, it stores byte strings rather than character strings.
This means they have the binary character set and collation, and comparison and sorting are based on the numeric values of the bytes in the values.
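As a small illustration (BigQuery syntax; the values are only for demonstration), the sort follows a byte-by-byte numeric comparison:
WITH tbl AS (
  SELECT b'123' AS col UNION ALL SELECT b'234'
)
-- b'123' is the bytes 0x31 0x32 0x33 and b'234' is 0x32 0x33 0x34,
-- so b'123' sorts first because its first byte (0x31) is the smaller one
SELECT col, col < b'234' AS sorts_before_234
FROM tbl
ORDER BY col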

How the BETWEEN operator works for varchar and char data types in PostgreSQL

I would like to use the BETWEEN operator on varchar and char(5) columns in a table. However, I don't see any effect of the BETWEEN operator in the results.
Please note that "code" is the column name.
I have a table with a column of type VARCHAR, and I apply a cast operation to convert it to CHAR(5). When I apply the BETWEEN operator in the query on either of the columns, it doesn't produce a filtered result.
For VARCHAR COLUMN
select code from diagnoses where code between '4254' and
'5855'
For CHAR(5) COLUMN
select cast(code as char(5)) as code
from diagnoses where code between '4254' and '5855'
I expect the output to have only records within this range, but instead I also get records which don't meet this criterion.
demo:db<>fiddle
The BETWEEN operator for character-based types uses alphabetical (lexicographic) order. That means it uses the order
4254
45829
486
...
58381
5845
5855
So the output is completely correct.
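A small side-by-side check makes the difference visible (values chosen only for illustration):
-- '486' > '4254' character by character ('8' > '2' in the second position),
-- so the text range includes it while the numeric range does not
SELECT '486' BETWEEN '4254' AND '5855' AS in_text_range,    -- true
       486   BETWEEN 4254   AND 5855   AS in_numeric_range; -- false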
If you want numeric order, you have to cast the column to an appropriate data type:
select
cast(code as char(5)) as code
from
diagnoses
where
cast(code as int) between 4254 and 5855 -- cast here
Result:
4254
5119
5589
5845
5855
I would write the logic like this:
where code::int between 4254 and 5855
Or, if you want to use an index:
where code between '4254' and '5855' and
length(code) = 4
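A rough sketch of that index-friendly variant (the index name is just an example):
CREATE INDEX diagnoses_code_idx ON diagnoses (code);

SELECT code
FROM diagnoses
WHERE code BETWEEN '4254' AND '5855'
  AND length(code) = 4;   -- drops longer codes such as '45829' that fall inside the text range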

How to turn a column into an array in PostgreSQL when the column's data type is a composite type?

I'm using PostgreSQL 9.4 and I'm currently trying to collect a column's values into an array. For "normal" (not user-defined) data types I can get it to work.
To explain my problem in detail, I made up a minimal example.
Let's assume we define a composite type "compo", create a table "test_rel", and insert some values. It looks like this and works for me:
CREATE TYPE compo AS(a int, b int);
CREATE TABLE test_rel(t1 compo[],t2 int);
INSERT INTO test_rel VALUES('{"(1,2)"}',3);
INSERT INTO test_rel VALUES('{"(4,5)","(6,7)"}',3);
Next, we try to get an array with column t2's values. The following also works:
SELECT array(SELECT t2 FROM test_rel WHERE t2='3');
Now we try to do the same with column t1 (the column with the composite type). My problem is that the following doesn't work:
SELECT array(SELECT t1 FROM test_rel WHERE t2='3');
ERROR: could not find array type for data type compo[]
Could someone please give me a hint why the same statement doesn't work with the composite type? I'm not only new to Stack Overflow, but also to PostgreSQL and plpgsql, so please tell me if I'm doing something the wrong way.
There was some discussion about this on the PostgreSQL mailing list.
Long story short, both
select array(select array_type from ...)
select array_agg(array_type) from ...
represent the concept of an array of arrays, which PostgreSQL doesn't support. PostgreSQL supports multidimensional arrays, but they have to be rectangular. For example, ARRAY[[0,1],[2,3]] is valid, but ARRAY[[0],[1,2]] is not.
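A quick way to see this restriction for yourself (the exact error text may vary between versions):
SELECT ARRAY[[0,1],[2,3]];   -- ok: a rectangular 2x2 array
SELECT ARRAY[[0],[1,2]];     -- ERROR: multidimensional arrays must have array expressions with matching dimensions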
There were some improvements to both the array() constructor and the array_agg() function in 9.5. The documentation now explicitly states that they accumulate array arguments into a multidimensional array, but only if all of the parts have equal dimensions:
array() constructor: If the subquery's output column is of an array type, the result will be an array of the same type but one higher dimension; in this case all the subquery rows must yield arrays of identical dimensionality, else the result would not be rectangular.
array_agg(any array type): input arrays concatenated into array of one higher dimension (inputs must all have same dimensionality, and cannot be empty or NULL)
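So on 9.5 or later, something like the following should yield a two-dimensional array (a sketch based on the quoted documentation; it assumes all input arrays have the same length):
SELECT array_agg(arr)   -- expected result: {{0,1},{2,3}}
FROM (VALUES (ARRAY[0,1]), (ARRAY[2,3])) AS t(arr);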
For 9.4, you could wrap the array into a row; this way you can create something which is almost an array of arrays:
SELECT array(SELECT ROW(t1) FROM test_rel WHERE t2='3');
SELECT array_agg(ROW(t1)) FROM test_rel WHERE t2='3';
Or you could use a recursive CTE (and array concatenation) to work around the problem, like:
with recursive inp(arr) as (
    values (array[0,1]), (array[1,2]), (array[2,3])
),
idx(arr, idx) as (
    select arr, row_number() over ()
    from inp
),
agg(arr, idx) as (
    select array[[0, 0]] || arr, idx
    from idx
    where idx = 1
    union all
    select agg.arr || idx.arr, idx.idx
    from agg
    join idx on idx.idx = agg.idx + 1
)
select arr[array_lower(arr, 1) + 1 : array_upper(arr, 1)]
from agg
order by idx desc
limit 1;
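If I trace the CTE correctly, the surviving row should contain {{0,1},{1,2},{2,3}}: the seed row {0,0} is sliced off again by the array_lower/array_upper expression, and ORDER BY idx DESC LIMIT 1 keeps only the fully accumulated array (a hedged reading of the code, not a tested result).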
But of course this solution is highly dependent on your data (and its dimensions).

SQL SELECT with decimal field not working

I have a table with a decimal field in Postgres. The value of the field is 39.95.
I am trying to write a SQL query to get all records with the value 39.95.
This is my code:
SELECT CR_Sale.id , CR_Sale.sale_date, ROUND(CR_Sale.sale_bruto_total, 2)
AS sale_bruto_total, CR_Sale.sale_ticket
FROM crm_CR_Sale CR_Sale
WHERE sale_bruto_total = 39.95
ORDER BY sale_timestamp DESC;
When I execute the query no results are found.
If I change my code to
WHERE sale_bruto_total <= 39.95
I get a result and can see the records with value 39.95
What am I doing wrong?
Thanks
Try the following:
SELECT CR_Sale.id , CR_Sale.sale_date, ROUND(CR_Sale.sale_bruto_total, 2)
AS sale_bruto_total, CR_Sale.sale_ticket
FROM crm_CR_Sale CR_Sale
WHERE sale_bruto_total = CAST(39.95 AS dec(5,2))
ORDER BY sale_timestamp DESC;
Note: the sale_bruto_total column should be of a decimal type.
The number 39.95 here is effectively being compared as a float rather than a decimal, and equality comparisons with floats are unreliable because of their internal precision (which is the reason the decimal data type exists in the first place), so you have to add an explicit cast:
sale_bruto_total = 39.95::decimal(5,2)
Also, if the value of sale_bruto_total is exactly 39.95 because the field is defined as a decimal with two decimal places, then you do not need the round() function in the select list. On the other hand, if there are more decimal places, then you need to use the round() function in the filter too. So either:
SELECT id, sale_date, round(sale_bruto_total, 2) AS sale_bruto_total, sale_ticket
FROM crm_CR_Sale
WHERE round(sale_bruto_total, 2) = 39.95::decimal(5,2)
ORDER BY sale_timestamp DESC;
or
SELECT id, sale_date, sale_bruto_total, sale_ticket
FROM crm_CR_Sale
WHERE sale_bruto_total = 39.95::decimal(5,2)
ORDER BY sale_timestamp DESC;
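For context, here is a minimal illustration (independent of the table above) of why equality checks on float values tend to fail while a rounded or cast comparison succeeds; the numbers are arbitrary:
SELECT 0.1::float8 + 0.2::float8 = 0.3::float8                        AS exact_match,   -- false, binary rounding error
       round((0.1::float8 + 0.2::float8)::numeric, 2) = 0.30::numeric AS rounded_match; -- true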

How to use a function in a subquery

I have a table named MemberCheque with the fields:
MemberName, Amount
I want to show the name and the respective amount both in numbers and in words, after separating the integer part of the amount from the decimal part. So my query is like:
SELECT MemName, Amount,
       (SELECT (Amount)%1*100 AS lefAmn, dbo.fnNumberToWords(lefAmn) + 'Hundred ',
               (Amount) - (Amount)%1 AS righAmnt, dbo.fnNumberToWords(righAmnt) + ' Cents'
        FROM MemberCheque) AS AmountInWords
FROM MemberCheque
but my function can only take an integer value to convert into words. So I am separating Amount into two parts, before and after the decimal point, but when I try to run this query it gives me an error that lefAmn and righAmnt are not recognised, because I am trying to pass aliases defined in the same query as parameters.
The first problem is that you have a subquery that is returning more than one value, and that is not allowed for a subquery in the select clause.
The answer to your specific question is to use cast() (or convert()) to make the numbers integers:
select leftAmt, rightAmt,
       (dbo.fnNumberToWords(cast(leftAmt as int)) + 'Hundred ' +
        dbo.fnNumberToWords(cast(rightAmt as int)) + ' Cents'
       ) as AmountInWords
from (SELECT (Amount % 1) * 100 AS leftAmt,
             Amount - (Amount % 1) AS rightAmt
      from MemberCheque
     ) mc
If you can't alter your function, then CAST the left/right values as INT:
CAST((Amount)%1*100 AS INT) AS lefAmn
CAST((Amount) - (Amount)%1 AS INT) AS righAmnt
You can't pass an alias created in the same statement as your function's parameter; you need:
dbo.fnNumberToWords (CAST((Amount)%1*100 AS INT))
dbo.fnNumberToWords (CAST((Amount) - (Amount)%1 AS INT))
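Putting that together with the original query (keeping the question's column names and its lefAmn/righAmnt mapping as written), a sketch could look like:
SELECT MemName,
       Amount,
       dbo.fnNumberToWords(CAST((Amount % 1) * 100 AS INT)) + 'Hundred ' +
       dbo.fnNumberToWords(CAST(Amount - (Amount % 1) AS INT)) + ' Cents' AS AmountInWords
FROM MemberCheque;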