PostgreSQL - How to cast dynamically? - sql

I have a column that has the type of the dataset in text.
So I want to do something like this:
SELECT CAST ('100' AS %INTEGER%);
SELECT CAST (100 AS %TEXT%);
SELECT CAST ('100' AS (SELECT type FROM dataset_types WHERE id = 2));
Is that possible with PostgreSQL?

SQL is strongly typed and static. Postgres demands to know the number of columns and their data types at the time of the call. So you need dynamic SQL in one of the procedural language extensions for this (a sketch follows below the related links). And then you still face the obstacle that functions (necessarily) have a fixed return type. Related:
Dynamically define returning row types based on a passed given table in plpgsql?
Function to return dynamic set of columns for given table
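A minimal sketch of the dynamic-SQL route in PL/pgSQL, assuming the dataset_types(id, type) table from the question; the function name cast_as_text is invented for illustration, and its return type necessarily stays fixed (text here):
CREATE OR REPLACE FUNCTION cast_as_text(_val text, _type_id int)
  RETURNS text
  LANGUAGE plpgsql AS
$func$
DECLARE
   _type   text;
   _result text;
BEGIN
   SELECT type INTO _type FROM dataset_types WHERE id = _type_id;
   -- %L quotes the value safely; the type name is interpolated verbatim,
   -- so it must come from a trusted table (SQL injection!)
   EXECUTE format('SELECT CAST(%L AS %s)::text', _val, _type)
   INTO _result;
   RETURN _result;  -- the function's return type is still fixed: text
END
$func$;
-- SELECT cast_as_text('100', 2);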
Or you go with a two-step flow. First concatenate the query string (with another SELECT query). Then execute the generated query string. Two round trips to the server.
SELECT '100::' || type FROM dataset_types WHERE id = 2; -- record resulting string
Execute the result. (And make sure you didn't open any vectors for SQL injection!)
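For illustration, assuming dataset_types(id, type) holds type = 'integer' for id = 2, the two round trips could look like this:
-- round trip 1: build the expression string
SELECT '100::' || type FROM dataset_types WHERE id = 2;
-- returns the text: 100::integer
-- round trip 2: send the generated expression back in a new statement
SELECT 100::integer;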
About the short cast syntax:
Postgres data type cast

Related

sql select query for impala column with map data type

For a table, say details, with the schema:
Column | Type
name | string
desc | map<int, string>
How do I form a select query - to be run by a Java program - which expects the result set in this structure?
name | desc
Bob | {1,"home"}
Alice | {2,"office"}
Keeping in mind the limitations in Impala with regard to complex types (here):
The result set of an Impala query always contains all scalar types;
the elements and fields within any complex type queries must be
"unpacked" using join queries.
i.e. select * from details; would only return results without the map-typed (complex) column.
The closest I've come up with is select name, map_col.key, map_col.value from details, details.desc map_col; - but the result set is obviously not in the expected format.
Thanks in advance.

How do I change the data type of a calculated attribute in DML with PostgreSQL?

Knowing that the SQL Data Definition Language (DDL) statements of the Postgres server (SP) that can be used are:
CREATE
ALTER
DROP
And the SQL Data Manipulation Language (DML) statements of SP are:
SELECT
INSERT
UPDATE
DELETE
When we run a SELECT, the projected attributes come back in the data type established in the DDL. And when an operation on two attributes of the same type produces a calculated attribute in a SELECT, its type will be the same as that of the operands, but I need to change it.
My problem is that I want to calculate a percentage, A / B * 100, between two bigint attributes, but when I do A/B the calculated field comes out as 0, because integer division drops the decimal part that I also need in order to multiply by 100.
The following SQL statement does not work with bigint:
SELECT id, a/b
FROM <mytable>;
This returns 0 in the calculated field, which is what I want to avoid.
And what I would like to have is:
SELECT id, first_operand * 100
FROM (SELECT id, a/b TYPE <newtype> AS first_operand
FROM <mytable>) firstTable
NATURAL JOIN <mytable>;
So what I would like to know is how to change the data type in a SELECT projection, or what other way do I have to do this?
In Postgres you can easily cast by appending ::<type to cast to> to an expression. So in your case you can try a::decimal / b::decimal so that the result of a / b is a decimal. (Actually, casting only one operand would already be enough here, if you prefer it less verbose.)
You can do type casting using:
the CAST function, i.e. CAST(expression AS type)
or also:
the PostgreSQL cast operator, i.e. expression::type
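A minimal sketch with made-up names (a table mytable with bigint columns a and b), showing both notations:
-- short cast syntax: casting one operand is enough to make the division numeric
SELECT id, a::numeric / b * 100 AS pct
FROM mytable;
-- the same with the standard CAST function
SELECT id, CAST(a AS numeric) / b * 100 AS pct
FROM mytable;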

Conditional casting of column datatype

I have a subquery that returns a varchar column. In some cases this column contains only numeric values, and in those cases I need to cast it to bigint. I've tried to use a CAST(CASE ...) construction, but CASE is an expression that returns a single result and, regardless of the path taken, it always needs to resolve to the same data type (or one implicitly convertible to it). Is there any tricky way to change a column's data type depending on a condition in PostgreSQL, or not? Google can't help me.
SELECT
prefix,
module,
postfix,
id,
created_date
FROM
(SELECT
s."prefix",
coalesce(m."replica", to_char(CAST((m."id_type" * 10 ^ 12) AS bigint) + m."id", 'FM0000000000000000')) "module",
s."postfix",
s."id",
s."created_date"
FROM some_subquery
There is really no way to do what you want.
A SQL query returns a fixed set of columns, with the names and types being fixed. So, a priori what you want to do does not fit well within SQL.
You could work around this by inventing your own type that is either a big integer or a string, or you could store the value as JSON. But those are work-arounds. The SQL query itself is really returning one "type" for each column; that is how SQL works.
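A rough sketch of the JSON idea in PostgreSQL, assuming the varchar column is called val (the names are illustrative): digit-only values become JSON numbers, everything else stays a JSON string, and the output column has one fixed type (jsonb):
SELECT CASE
          WHEN val ~ '^\d+$' THEN to_jsonb(val::bigint)  -- digits only: JSON number
          ELSE to_jsonb(val)                             -- otherwise: JSON string
       END AS val_json
FROM some_subquery;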

When is the type of a column in a SQL query result determined?

When performing a select query against a database, the returned result will have columns of a certain type.
If you perform a simple query like
select name as FirstName
from database
then the type of the resulting FirstName column will be that of database.name.
If you perform a query like
select age*income
from database
then the resulting data type will be that of the return value from the age*income expression.
What happens when you use something like
select try_convert(float, mycolumn)
from database
where database.mycolumn has type nvarchar. I assume that the resulting column has type float, which is decided by the return type of try_convert.
But consider this example
select coalesce(try_convert(float, mycolumn), mycolumn)
from database
which should give a column with the values of mycolumn unchanged if try_convert fails, but mycolumn as a float when/if that is possible.
Is this determination made as the first row is handled?
Or will the type always be determined by the function called independently of the data in the rows?
Is it possible to conditionally perform a conversion?
I would like to convert to float in the case where this is possible for all rows and leave unchanged in case it fails for any row.
Update 1
It seems that the answer to the first part of the question is that the column type is determined by the expression at compile time, which means that you cannot have a dynamic column type depending on the data.
I see two workarounds for this (both sketched below).
Option 1
For each column, count the number of non-null rows of try_convert(float, mycolumn), and if this number is 0 then do not perform the conversion. This will of course read the rows many times and might be inefficient.
Option 2
Simply repeat all columns, once without conversion and once with conversion, and then use the interesting one.
One could also perform another select statement where only columns with non-null values are included.
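Rough T-SQL sketches of the two options, reusing the question's names (the table name is bracketed because DATABASE is a reserved word):
-- Option 1: COUNT ignores NULLs, so 0 convertible rows means "leave the column alone"
SELECT COUNT(TRY_CONVERT(float, mycolumn)) AS convertible_rows
FROM [database];
-- Option 2: return the column twice and pick the interesting variant afterwards
SELECT mycolumn,
       TRY_CONVERT(float, mycolumn) AS mycolumn_as_float
FROM [database];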
Background
I have a dynamically generated pivot table with many (~200) columns, of which some have string values and others have numbers.
I would like to cast all columns as float where this is possible and leave the other columns unchanged (or cast as nvarchar).
The data
The data is mostly NULL values with some columns having text string and other columns having numbers. There are no columns with "mixed" content.
The types are determined at compile time, not at execution. try_convert(float, ...) knows exactly the type at parse/compile time, because float here is a keyword, not a value. For expressions like COALESCE(foo, bar) the type is similarly determined at compile time, following the data type precedence rules already linked.
When you build your dynamic pivot you'll have to know the result type, using the same inference rules the SQL parser/compiler uses. I understand some rules are counter-intuitive; when in doubt, test it out.
For the detail-oriented: some expression types can be determined at parse time, e.g. N'foo'. But most have to be resolved at compile time, when the names of tables and columns are bound to actual objects in the database, because only then is the type discovered.
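A small T-SQL demonstration of the compile-time rule: float outranks nvarchar in data type precedence, so the COALESCE column is typed float for every row, and a row whose text cannot be converted fails at execution instead of coming back unchanged:
SELECT COALESCE(TRY_CONVERT(float, mycolumn), mycolumn) AS val
FROM (VALUES (N'1.5'), (N'abc')) AS t(mycolumn);
-- first row: 1.5 (as float); second row: error converting nvarchar to float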

integer Max value constants in SQL Server T-SQL?

Are there any constants in T-SQL like there are in some other languages that provide the max and min values ranges of data types such as int?
I have a code table where each row has an upper and lower range column, and I need an entry that represents a range where the upper range is the maximum value an int can hold (sort of like a hackish infinity). I would prefer not to hard-code it and instead use something like SET UpperRange = int.Max
There are two options:
user-defined scalar function
properties table
In Oracle, you can do it within Packages - the closest SQL Server has is Assemblies...
I don't think there are any defined constants but you could define them yourself by storing the values in a table or by using a scalar valued function.
Table
Set up a table that has three columns: TypeName, Max and Min. That way you only have to populate them once.
Scalar Valued Function
Alternatively you could use a scalar-valued function, GetMaxInt() for example (see this StackOverflow answer for a real example).
You can find all the max/min values here: http://msdn.microsoft.com/en-us/library/ms187752.aspx
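A minimal sketch of both suggestions; the object names are illustrative, and the values are the documented int range:
-- a lookup table populated once
CREATE TABLE dbo.TypeLimits (
    TypeName sysname NOT NULL PRIMARY KEY,
    MinValue bigint  NOT NULL,
    MaxValue bigint  NOT NULL
);
INSERT INTO dbo.TypeLimits (TypeName, MinValue, MaxValue)
VALUES ('int', -2147483648, 2147483647);
GO
-- a scalar-valued function (must be the only statement in its batch)
CREATE FUNCTION dbo.GetMaxInt()
RETURNS int
AS
BEGIN
    RETURN 2147483647;
END;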
Avoid Scalar-Functions like the plague:
Scalar UDF Performance Problem
That being said, I wouldn't use the 3-Column table another person suggested.
This would cause implicit conversions just about everywhere you'd use it.
You'd also have to join to the table multiple times if you needed to use it for more than one type.
Instead have a column for each Min and Max of each data type (defined using its own data type) and call those directly to compare to.
Example:
SELECT *
FROM SomeTable as ST
CROSS JOIN TypeRange as TR
WHERE ST.MyNumber BETWEEN TR.IntMin AND TR.IntMax
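One possible definition of the TypeRange table used above (a sketch, not part of the original answer), with a typed Min/Max column per data type so no implicit conversions occur:
CREATE TABLE dbo.TypeRange (
    IntMin    int    NOT NULL DEFAULT -2147483648,
    IntMax    int    NOT NULL DEFAULT 2147483647,
    BigIntMin bigint NOT NULL DEFAULT -9223372036854775808,
    BigIntMax bigint NOT NULL DEFAULT 9223372036854775807
);
INSERT INTO dbo.TypeRange DEFAULT VALUES;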