I have a column that stores the data type of the dataset as text.
So I want to do something like this:
SELECT CAST ('100' AS %INTEGER%);
SELECT CAST (100 AS %TEXT%);
SELECT CAST ('100' AS (SELECT type FROM dataset_types WHERE id = 2));
Is that possible with PostgreSQL?
SQL is strongly typed and static. Postgres demands to know the number of columns and their data types at the time of the call. So you need dynamic SQL in one of the procedural language extensions for this. And even then you still face the obstacle that functions (necessarily) have a fixed return type. Related:
Dynamically define returning row types based on a passed given table in plpgsql?
Function to return dynamic set of columns for given table
Or you go with a two-step flow. First concatenate the query string (with another SELECT query). Then execute the generated query string. Two round trips to the server.
SELECT 'SELECT ''100''::' || type FROM dataset_types WHERE id = 2; -- yields the query string: SELECT '100'::integer
Execute the result. (And make sure you didn't open any vectors for SQL injection!)
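To do both steps in a single round trip, you can run the generated query with dynamic SQL in a DO block. A minimal sketch, assuming the dataset_types table from the question; the cast to regtype validates the stored type name and defuses SQL injection:

DO
$$
DECLARE
   _type   regtype;  -- regtype rejects anything that is not an existing type name
   _result text;
BEGIN
   SELECT type::regtype INTO _type FROM dataset_types WHERE id = 2;
   EXECUTE format('SELECT %L::%s::text', '100', _type)  -- e.g. SELECT '100'::integer::text
   INTO _result;
   RAISE NOTICE 'Result: %', _result;
END
$$;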
About the short cast syntax:
Postgres data type cast
We know that on a Postgres server the SQL Data Definition Language (DDL) statements that can be used are:
CREATE
ALTER
DROP
And for SQL Data Manipulation Language (DML):
SELECT
INSERT
UPDATE
DELETE
When we run a SELECT, the projected attributes come back with the data types established in the DDL. And when an operation on two attributes of the same type produces a calculated attribute in the SELECT, the result has the same type as the operands, but I need to change it.
My problem is that I want to calculate a percentage, A / B * 100, between two bigint columns. But when I do A/B, the calculated field is 0, because integer division discards the decimal values that I also need in order to multiply by 100.
The following SQL statement does not work with bigint:
SELECT id, a/b
FROM <mytable>;
This returns 0 in the calculated field, which is what I want to avoid.
And what I would like to have is:
SELECT id, first_operand * 100
FROM (SELECT id, a/b TYPE <newtype> AS first_operand
FROM <mytable>) firstTable
NATURAL JOIN <mytable>;
So what I would like to know is how to change the data type in a SELECT projection, or what other way do I have to do this?
In Postgres you can easily cast by appending ::<type to cast to> to an expression. So in your case you can try a::decimal / b::decimal to have the result of a / b be a decimal. (Actually, casting only one operand would already be enough here, if you prefer it less verbose.)
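For the percentage above that could look like this (a sketch against the hypothetical mytable from the question):

SELECT id, a::decimal / b * 100 AS percentage
FROM mytable;

The cast binds tighter than the division, so the decimal places survive the whole calculation.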
You can do typecasting using:
cast function i.e. CAST ( expression AS type )
or also:
PostgreSQL type cast :: i.e. expression::type
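Both forms in action:

SELECT CAST('100' AS integer);  -- SQL-standard syntax
SELECT '100'::integer;          -- PostgreSQL shorthand

Both return the integer 100; the :: form is shorter but PostgreSQL-specific.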
Is there a way to insert a single record into a DB table without using a structure as a data holder?
I was thinking of something like
INSERT INTO table(field1, field2) VALUES (value1, value2).
Is something like this possible in ABAP? Thanks.
No, this is not possible because you can't specify individual target fields with Open SQL INSERT statements. You might be able to work around the need for a temporary structure on the right-hand side by using a VALUE type( ... ) operator, but I haven't tried that yet.
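For what it's worth, an untested sketch of that idea on a sufficiently recent release (7.40+); ztable and its fields are made up for illustration:

" Host expression builds the work area inline, no named structure needed
INSERT ztable FROM @( VALUE #( field1 = 'value1'
                               field2 = 'value2' ) ).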
You should try to create the structure dynamically.
Using CL_ABAP_STRUCTDESCR=>CREATE and specifying a list of components (the list of columns) and their types, you dynamically get a description of your structure that you can use to create a data reference.
Quick pseudo-code:
DATA components TYPE cl_abap_structdescr=>component_table.
DATA coldescr   TYPE REF TO cl_abap_structdescr.
DATA cols       TYPE REF TO data.
FIELD-SYMBOLS <cols> TYPE any.

" Describe the columns (names and types here are just examples):
APPEND VALUE #( name = 'FIELD1' type = cl_abap_elemdescr=>get_string( ) ) TO components.
APPEND VALUE #( name = 'FIELD2' type = cl_abap_elemdescr=>get_i( ) ) TO components.

coldescr = cl_abap_structdescr=>create( components ).
CREATE DATA cols TYPE HANDLE coldescr.
ASSIGN cols->* TO <cols>.
You can use <cols> in your INSERT ... SELECT statement.
I'm trying to select a number of fields, one of which needs to be an array with each element of the array containing two values. Each array item needs to contain a name (character varying) and an ID (numeric). I know how to return an array of single values (using the ARRAY keyword) but I'm unsure of how to return an array of an object which in itself contains two values.
The query is something like
SELECT
t.field1,
t.field2,
ARRAY(--with each element containing two values i.e. {'TheName', 1 })
FROM MyTable t
I read that one way to do this is by selecting the values into a type and then creating an array of that type. Problem is, the rest of the function is already returning a type (which means I would then have nested types - is that OK? If so, how would you read this data back in application code - i.e. with a .Net data provider like NPGSQL?)
Any help is much appreciated.
ARRAYs can only hold elements of the same type
Your example displays a text and an integer value (no single quotes around 1). It is generally impossible to mix types in an array. To get those values into an array you have to create a composite type and then form an ARRAY of that composite type like you already mentioned yourself.
Alternatively you can use the data types json in Postgres 9.2+, jsonb in Postgres 9.4+ or hstore for key-value pairs.
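For instance, with jsonb each value keeps its own type inside the pair, so no composite type is needed (a quick sketch, 9.4+):

SELECT jsonb_build_object('name', 'TheName', 'id', 1) AS pair;
-- {"id": 1, "name": "TheName"}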
Of course, you can cast the integer to text and work with a two-dimensional text array. Consider the two syntax variants for array input in the demo below and consult the manual on array input.
There is a limitation to overcome. If you try to aggregate an ARRAY (build from key and value) into a two-dimensional array, the aggregate function array_agg() or the ARRAY constructor error out:
ERROR: could not find array type for data type text[]
There are ways around it, though.
Aggregate key-value pairs into a 2-dimensional array
PostgreSQL 9.1 with standard_conforming_strings = on:
CREATE TEMP TABLE tbl(
id int
,txt text
,txtarr text[]
);
The column txtarr is just there to demonstrate syntax variants in the INSERT command. The third row is spiked with meta-characters:
INSERT INTO tbl VALUES
(1, 'foo', '{{1,foo1},{2,bar1},{3,baz1}}')
,(2, 'bar', ARRAY[['1','foo2'],['2','bar2'],['3','baz2']])
,(3, '}b",a{r''', '{{1,foo3},{2,bar3},{3,baz3}}'); -- txt has meta-characters
SELECT * FROM tbl;
Simple case: aggregate two integers (I use the same one twice) into a two-dimensional int array:
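The naive attempt runs into exactly the limitation described above (a sketch; on 9.1 this errors out, on 9.5+ it works, see the update further down):

SELECT array_agg(ARRAY[id, id]) AS x
FROM   tbl;
-- in 9.1: ERROR: could not find array type for data type integer[]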
Update: Better with custom aggregate function
With the polymorphic type anyarray it works for all base types:
CREATE AGGREGATE array_agg_mult (anyarray) (
SFUNC = array_cat
,STYPE = anyarray
,INITCOND = '{}'
);
Call:
SELECT array_agg_mult(ARRAY[ARRAY[id,id]]) AS x -- for int
,array_agg_mult(ARRAY[ARRAY[id::text,txt]]) AS y -- or text
FROM tbl;
Note the additional ARRAY[] layer to make it a multidimensional array.
Update for Postgres 9.5+
Postgres now ships a variant of array_agg() accepting array input and you can replace my custom function from above with this:
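A sketch of the replacement call, mirroring the example above:

SELECT array_agg(ARRAY[id::text, txt]) AS y
FROM   tbl;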
The manual:
array_agg(expression)
...
input arrays concatenated into array of one higher dimension (inputs must all have same dimensionality, and cannot be empty or NULL)
I suspect that without having more knowledge of your application I'm not going to be able to get you all the way to the result you need. But we can get pretty far. For starters, there is the ROW function:
# SELECT 'foo', ROW(3, 'Bob');
?column? | row
----------+---------
foo | (3,Bob)
(1 row)
So that right there lets you bundle a whole row into a cell. You could also make things more explicit by making a type for it:
# CREATE TYPE person(id INTEGER, name VARCHAR);
CREATE TYPE
# SELECT now(), row(3, 'Bob')::person;
              now              |   row
-------------------------------+---------
 2012-02-03 10:46:13.279512-07 | (3,Bob)
(1 row)
Incidentally, whenever you make a table, PostgreSQL makes a type of the same name, so if you already have a table like this you also have a type. For example:
# DROP TYPE person;
DROP TYPE
# CREATE TABLE people (id SERIAL, name VARCHAR);
NOTICE: CREATE TABLE will create implicit sequence "people_id_seq" for serial column "people.id"
CREATE TABLE
# SELECT 'foo', row(3, 'Bob')::people;
?column? | row
----------+---------
foo | (3,Bob)
(1 row)
See in the third query there I used people just like a type.
Now this is not likely to be as much help as you'd think for two reasons:
1. I can't find any convenient syntax for pulling data out of the nested row.
I may be missing something, but I just don't see many people using this syntax. The only example I see in the documentation is a function taking a row value as an argument and doing something with it. I don't see an example of pulling the row out of the cell and querying against parts of it. It seems like you can package the data up this way, but it's hard to deconstruct after that. You'll wind up having to make a lot of stored procedures.
2. Your language's PostgreSQL driver may not be able to handle row-valued data nested in a row.
I can't speak for NPGSQL, but since this is a very PostgreSQL-specific feature you're not going to find support for it in libraries that support other databases. For example, Hibernate isn't going to be able to handle fetching an object stored as a cell value in a row. I'm not even sure JDBC would be able to give Hibernate the information usefully, so the problem could go quite deep.
So, what you're doing here is feasible provided you can live without a lot of the niceties. I would recommend against pursuing it though, because it's going to be an uphill battle the whole way, unless I'm really misinformed.
A simple way without hstore
SELECT
jsonb_agg(to_jsonb(t))
FROM (
SELECT
unnest(ARRAY ['foo', 'bar', 'baz']) AS table_name
) t
>>> [{"table_name": "foo"}, {"table_name": "bar"}, {"table_name": "baz"}]
Are there any constants in T-SQL, like there are in some other languages, that provide the max and min value ranges of data types such as int?
I have a code table where each row has an upper and lower range column, and I need an entry that represents a range where the upper bound is the maximum value an int can hold (sort of like a hackish infinity). I would prefer not to hard-code it and instead use something like SET UpperRange = int.Max.
There are two options:
user-defined scalar function
properties table
In Oracle, you can do it within Packages - the closest SQL Server has is Assemblies...
I don't think there are any defined constants but you could define them yourself by storing the values in a table or by using a scalar valued function.
Table
Set up a table that has three columns: TypeName, Max and Min. That way you only have to populate them once.
Scalar Valued Function
Alternatively you could use scalar valued functions, GetMaxInt() for example (see this StackOverflow answer for a real example).
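A hedged sketch of that approach; the function name and body are illustrative:

CREATE FUNCTION dbo.GetMaxInt()
RETURNS int
AS
BEGIN
    RETURN 2147483647;  -- maximum value of the T-SQL int type
END;

Then use it wherever the literal would otherwise be hard-coded, e.g. SET @UpperRange = dbo.GetMaxInt();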
You can find all the max/min values here: http://msdn.microsoft.com/en-us/library/ms187752.aspx
Avoid Scalar-Functions like the plague:
Scalar UDF Performance Problem
That being said, I wouldn't use the 3-Column table another person suggested.
This would cause implicit conversions just about everywhere you'd use it.
You'd also have to join to the table multiple times if you needed to use it for more than one type.
Instead, have a column for each Min and Max of each data type (defined using its own data type) and call those directly to compare to.
Example:
SELECT *
FROM SomeTable as ST
CROSS JOIN TypeRange as TR
WHERE ST.MyNumber BETWEEN TR.IntMin AND TR.IntMax
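For reference, a sketch of what that single-row TypeRange table could look like (names and the set of covered types are illustrative):

CREATE TABLE TypeRange (
    IntMin    int    NOT NULL,
    IntMax    int    NOT NULL,
    BigIntMin bigint NOT NULL,
    BigIntMax bigint NOT NULL
);
INSERT INTO TypeRange (IntMin, IntMax, BigIntMin, BigIntMax)
VALUES (-2147483648, 2147483647, -9223372036854775808, 9223372036854775807);

Each bound lives in its own data type, so comparisons like the one above need no implicit conversions.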