operator does not exist: text ->> unknown

I am trying to read a value from my JSON field, which is created using C#:
SELECT "Price"->>'TotalPrice' FROM "Table"
but I get an error in Postgres:
ERROR: operator does not exist: text ->> unknown
LINE 1: SELECT "Price"->>'Price' FROM "ReadModel"...
No operator matches the given name and argument types. You might need to add explicit type casts.

The error message is pretty obvious: your column is defined as text, not as jsonb or json. The ->> operator only works on jsonb or json values, so you need to cast the column:
SELECT "Price"::jsonb ->> 'TotalPrice'
FROM "ReadModel"."MyListingDto"
Unrelated to your problem, but: you should really avoid those dreaded quoted identifiers. They are much more trouble than they are worth.
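A minimal, self-contained illustration (the table name and sample value are made up):
-- minimal reproduction: the JSON is stored in a plain text column
CREATE TABLE listing ("Price" text);
INSERT INTO listing VALUES ('{"TotalPrice": 42.5}');

SELECT "Price"->>'TotalPrice' FROM listing;           -- ERROR: operator does not exist: text ->> unknown
SELECT "Price"::jsonb ->> 'TotalPrice' FROM listing;  -- returns '42.5'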

Append a value to json array

I have a PostgreSQL table profile with columns (username, friends). username is a string and friends is of type json, currently holding the value ["Mary", "Bob"]. I want to append an item at the end of the array so it becomes ["Mary", "Bob", "Alice"].
I have currently tried:
UPDATE profile SET friends = friends || '["Alice"]'::jsonb WHERE username = 'David';
This yields the error:
[ERROR] 23:56:23 error: operator does not exist: json || jsonb
I tried changing the first expression to include json instead of jsonb but I then got the error:
[ERROR] 00:06:25 error: operator does not exist: json || json
Other answers seem to suggest the || operator is indeed a thing, e.g.:
Appending (pushing) and removing from a JSON array in PostgreSQL 9.5+
How do I append an item to the end of a json array?
The data type json is much more limited than jsonb. Use jsonb instead of json and the solution you found simply works.
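If changing the column type is an option, the one-time conversion is a small sketch like this (assuming nothing else depends on the json type):
ALTER TABLE profile
ALTER COLUMN friends TYPE jsonb
USING friends::jsonb;
After that, your original UPDATE with || works as written.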
While stuck with json, a possible workaround is to cast to jsonb and back:
UPDATE profile
SET friends = (friends::jsonb || '["Alice"]')::json
WHERE username = 'David';
You might use an explicit type, jsonb '["Alice"]', too, but that's optional: while the expression is unambiguous like this, the untyped string literal is coerced to jsonb automatically. If you provide a typed value in place of '["Alice"]', an explicit cast may be required.
If friends can be NULL, you'll want to define what should happen then. Currently, nothing happens; or, to be precise, the update leaves the NULL value unchanged, since concatenating with NULL yields NULL.
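One way to define it, as a sketch that treats NULL as an empty array:
UPDATE profile
SET    friends = (COALESCE(friends::jsonb, '[]') || '["Alice"]')::json
WHERE  username = 'David';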
Then again, if you only have simple arrays of strings, consider the plain Postgres array type text[] instead.
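If you do switch to text[], appending becomes trivial:
UPDATE profile
SET    friends = array_append(friends, 'Alice')  -- assumes friends is now text[]
WHERE  username = 'David';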

Snowflake's `try_to_geography` raises an error instead of returning null

I am trying to run a query in Snowflake to convert a GeoJSON object into Snowflake's baked-in geospatial data types:
SELECT id, -- some other columns
TRY_TO_GEOGRAPHY(raw_row) -- raw_row is my GeoJSON object
FROM ...
The Snowflake documentation for TRY_TO_GEOGRAPHY says:
Parses an input and returns a value of type GEOGRAPHY.
This function is essentially identical to TO_GEOGRAPHY except that it returns NULL when TO_GEOGRAPHY would issue an error.
However, when I run the query I get the following very uninformative error (no mention of the column, record, or even the SQL line that produced it):
Failed to cast variant value "null" to OBJECT
I pinpointed TRY_TO_GEOGRAPHY as the cause, since the query works if I comment out that line. Shouldn't Snowflake return NULL in this case? How can I fix the problem? And if there's faulty data in my table, is there any way for me to find the rows where the function fails? The table has a nine-digit row count, so I can't check them manually.
Is your raw_row of VARIANT type or VARCHAR?
The Snowflake documentation for TRY_TO_GEOGRAPHY also mentions (emphasis mine)
<variant_expression>
The argument must be an OBJECT in GeoJSON format.
So if you try to parse a variant value that is not an "OBJECT in GeoJSON format", it will indeed raise an error:
SELECT
TRY_TO_GEOGRAPHY('' :: VARCHAR), -- NULL
TRY_TO_GEOGRAPHY('' :: VARIANT), -- error!
TRY_TO_GEOGRAPHY('{"geometry": null, "type": "Feature"}' :: VARCHAR), -- NULL
TRY_TO_GEOGRAPHY('{"geometry": null, "type": "Feature"}' :: VARIANT); -- error!
However, one could argue the documentation is not all that clear on this point, and there are some cases where both input types behave the same. Surprisingly:
SELECT
-- just removing {"type": "Feature"} from the error above
TRY_TO_GEOGRAPHY('{"geometry": null}' :: VARIANT), -- NULL
-- just removing {"geometry": null} from the error above
-- a Feature without "geometry" and "properties" is ill-defined by the spec
TRY_TO_GEOGRAPHY('{"type": "Feature"}' :: VARIANT); -- NULL
So it's unclear whether this is a bug in Snowflake or intended behavior. Note that TRY_TO_GEOGRAPHY works as expected if the object is valid GeoJSON but the geography itself is invalid: for example, a polygon whose edges cross each other returns NULL with the TRY_ version but fails with an error with TO_GEOGRAPHY.
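A sketch illustrating that last point, using a self-intersecting "bowtie" polygon passed as VARCHAR (the coordinates are made up):
SELECT TRY_TO_GEOGRAPHY('{"type": "Polygon", "coordinates": [[[0,0],[1,1],[1,0],[0,1],[0,0]]]}'); -- NULL
SELECT TO_GEOGRAPHY('{"type": "Polygon", "coordinates": [[[0,0],[1,1],[1,0],[0,1],[0,0]]]}');     -- raises an error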
How to fix it: the simplest way is to convert the variant column to varchar, which is a bit silly, but it works. You can then query for the NULL values produced and inspect the GeoJSON values that caused the error, hoping there aren't too many to check manually.
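A sketch of both steps; the table and id column names are assumptions:
-- cast the VARIANT to VARCHAR so TRY_TO_GEOGRAPHY returns NULL instead of failing
SELECT id, TRY_TO_GEOGRAPHY(raw_row::VARCHAR) AS geo
FROM my_table;

-- then find the offending rows: parsing failed although the source value is present
SELECT id, raw_row
FROM my_table
WHERE TRY_TO_GEOGRAPHY(raw_row::VARCHAR) IS NULL
AND raw_row IS NOT NULL;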
You may need to convert only the object under the key geometry.
The GeoJSON may have other keys, such as feature and properties. Your table can be designed with these first-level keys as columns (in my case I just drop feature). If geometry is null, TRY_TO_GEOGRAPHY returns NULL. For the other keys there is no need for a conversion; just keep them as type OBJECT or VARIANT.
The problem seems to be producing GEOGRAPHY data from a non-viable source object, e.g. one whose geometry is null.
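A sketch of that design, with assumed table and column names:
SELECT id,
       raw_row:properties AS properties,                    -- kept as VARIANT, no conversion needed
       TRY_TO_GEOGRAPHY(raw_row:geometry::VARCHAR) AS geo   -- NULL when geometry is null
FROM my_table;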

ERROR: function regexp_matches(jsonb, unknown) does not exist in Tableau but works elsewhere

I have a column called "Bakery Activity" whose values are all JSONs that look like this:
{"flavors": [
{"d4js95-1cc5-4asn-asb48-1a781aa83": "chocolate"},
{"dc45n-jnsa9i-83ysg-81d4d7fae": "peanutButter"}],
"degreesToCook": 375,
"ingredients": {
"d4js95-1cc5-4asn-asb48-1a781aa83": [
"1nemw49-b9s88e-4750-bty0-bei8smr1eb",
"98h9nd8-3mo3-baef-2fe682n48d29"]
},
"numOfPiesBaked": 1,
"numberOfSlicesCreated": 6
}
I'm trying to extract the number of pies baked with a regex function in Tableau. Specifically, this one:
REGEXP_EXTRACT([Bakery Activity], '"numOfPiesBaked":"?([^\n,}]*)')
However, when I try to throw this calculated field into my text table, I get an error saying:
ERROR: function regexp_matches(jsonb, unknown) does not exist;
Error while executing the query
Worth noting: my data source is PostgreSQL, which Tableau regex functions support; not all of my entries have numOfPiesBaked in them; and when I run this in a regex simulator I get the correct extraction (actually, I get "numOfPiesBaked": 1, but removing the field name is a problem for another time).
What might be causing this error?
In short: Wrong data type, wrong function, wrong approach.
REGEXP_EXTRACT is obviously an abstraction layer of your client (Tableau), which is translated to regexp_matches() for Postgres. But that function expects text input. Since there is no assignment cast for jsonb -> text (for good reasons) you have to add an explicit cast to make it work, like:
SELECT regexp_matches("Bakery Activity"::text, '"numOfPiesBaked":"?([^\n,}]*)')
(The second argument can be an untyped string literal; Postgres function type resolution can derive the suitable data type text.)
Postgres 10 or later also has regexp_match(), which returns a single match (unlike regexp_matches(), which returns a set of rows); that would seem like the better translation.
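For comparison, a sketch with regexp_match(): it returns a text array, so we take the first element:
SELECT (regexp_match("Bakery Activity"::text, '"numOfPiesBaked":"?([^\n,}]*)'))[1];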
But regular expressions are the wrong approach to begin with.
Use the simple json/jsonb operator ->>:
SELECT "Bakery Activity"->>'numOfPiesBaked';
Returns '1' in your example.
If you know the value to be a valid integer, you can cast it right away:
SELECT ("Bakery Activity"->>'numOfPiesBaked')::int;
I found an easier way to handle JSONB data in Tableau.
First, make a calculated field from the JSONB field that converts it to a string using str([FIELD_name]).
Then, on top of that calculated field, make another calculated field using:
REGEXP_EXTRACT([String_Field_Name], '"Key_to_be_extracted":"?([^\n,}]*)')
This second calculated field holds the required key's value.

AnalysisException: Syntax error in line 1: error when taking modulus of a value using abs() in Impala

I want to take the modulus of a value in Impala and I am aware of the abs() function. However, when I use it like this
select abs(value) from table
it returns a value that is rounded to the nearest integer. The documentation states that I need to define the numeric_type. I have tried this
select abs(float value) from table
but this gives me the following error
AnalysisException: Syntax error in line 1: ... abs(float value) from table ^ Encountered: FLOAT Expected: ALL, CASE, CAST, DEFAULT, DISTINCT, EXISTS, FALSE, IF, INTERVAL, NOT, NULL, TRUNCATE, TRUE, IDENTIFIER CAUSED BY: Exception: Syntax error
Any ideas how I set abs() to return a float?
This should work:
SELECT cast(abs(-243.5) as float) AS AbsNum
I think you are misunderstanding the syntax. You call the function as abs(val). The return type is the same as the input type. It should work on integers, decimals, and floats.
If you want a particular type returned, then you need to pass in a value of that type, perhaps casting the argument to the specific type.
The documentation is:
abs(numeric_type a)
Purpose: Returns the absolute value of the argument.
Return type: Same as the input value
Admittedly, this does look like the type should be part of the function call. But the documentation is really using a programming-language-style declaration to show the types that are expected.
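A sketch of both options (value_col and my_table are hypothetical names):
-- cast the input: abs() then returns the same floating-point type
select abs(cast(value_col as float)) from my_table;
-- or cast the result instead
select cast(abs(value_col) as float) from my_table;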

Casting from a Packed(8) type to a TMSTMP (DEC15) type in a Unicode system (and back)

Background:
I have several tables that are connected for maintenance in a view cluster (SE54). Each of these tables has the standard Created/Changed By/On fields. For newly created data, updating the fields is easy, and I use event 05 (On Create) in the Table Maintenance Generator. Defaulting the Changed fields is a little more involved: I have to use event 01 (Before Save), and then update the tables TOTAL[] and EXTRACT[] with the field values as needed.
When maintaining the table in SM30, the format of TOTAL[] and EXTRACT[] is the same as the view I'm maintaining, with an additional flag to identify what type of change is made (update/create/delete).
However, when maintaining in SM54 (which is the business requirement), TOTAL[] and EXTRACT[] are just internal tables of character lines.
Problem:
I can figure out the type of the table that is being edited. But when I try to move the character line to the typed line, I get one of the following run-time errors (depending on how I try to move/assign it):
ASSIGN_BASE_TOO_SHORT
UC_OBJECTS_NOT_CONVERTIBLE
UC_OBJECTS_NOT_CHAR
All my structures are in the following format:
*several generic (flat) types
CREATED TYPE TMSTMP, "not a flat type
CHANGED TYPE TMSTMP, "not a flat type
CREATED_BY TYPE ERNAM,
CHANGED_BY TYPE AENAM,
The root of the problem is that the two timestamp fields are not flat types. I can see in the character line that the timestamps are represented by 8 characters.
Edit: Only after finding the solution could I identify the length-8 field as packed.
I have tried the following in vain:
"try the entire structure - which would be ideal
assign ls_table_line to <fs_of_the_correct_type> casting.
"try isolating just the timestamp field(s)
assign <just_the_8char_representation> to <fs_of_type_tmstmp> casting.
I've tried a few other variations on the "single field only" option with no luck.
Any ideas how I can cast from the Character type to type TMSTMP and then back again in order to update the internal table values?
I've found that the following works:
Instead of using:
field-symbols: <structure> type ty_mystructure,
<changed> type tmstmp.
assign gv_sapsingle_line to <structure> casting. "causes a runtime error
assign gv_sap_p8_field to <changed> casting. "ditto
I used this:
field-symbols: <structure> type any,
<changed> type any.
assign gv_sapsingle_line to <structure> casting type ty_mystructure.
assign gv_sap_p8_field to <changed> casting type ty_tmstmp.
For some reason it didn't like that I predefined the field symbols.
I find that odd as the documentation states the following:
Casting with an Implicit Type Declaration
Provided the field symbol is either fully typed or has one of the generic built-in ABAP types – C, N, P, or X – you can use the following statement:
ASSIGN ... TO <FS> CASTING.
When the system accesses the field symbol, the content of the assigned data object is interpreted as if it had the same type as the field symbol.
I can only assume that my structure wasn't compatible (due to the P8 -> TMSTMP conversion):
The length and alignment of the data object must be compatible with the field symbol type. Otherwise the system returns a runtime error. If the type of either the field symbol or the data object is – or contains – a string, reference type, or internal table, the type and position of these components must match exactly. Otherwise, a runtime error occurs.