I created a table in Hive with a complex column (a STRUCT type) whose fields have comments.
Table creation succeeded, but when I try to view or describe the table, the comments on the struct's fields are not displayed.
The CREATE TABLE succeeds:
CREATE TABLE TEST
(
Col1 STRING COMMENT 'column 1',
Col2 STRUCT <
Col21 :STRING COMMENT 'Column 2 row 1',
Col22 :STRING COMMENT 'Column 2 row 2',
Col23 :STRING COMMENT 'Column 2 row 3'
>
)
COMMENT 'Table Level comment';
But DESCRIBE FORMATTED does not display the comments for the complex column's fields:
Describe Formatted TEST;
+-------------------------------+----------------------------------------------------+-----------------------+--+
| col_name | data_type | comment |
+-------------------------------+----------------------------------------------------+-----------------------+--+
| # col_name | data_type | comment |
| | NULL | NULL |
| col1 | string | column 1 |
| col2 | struct<Col21:string,Col22:string,Col23:string> | |
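As a side note (a hedged sketch; exact behavior depends on the Hive version), Hive can describe a nested field directly, which at least lists the struct's members:

```sql
-- Hive accepts a table.column path in DESCRIBE for complex types;
-- whether the field-level COMMENTs show up here varies by version.
DESCRIBE TEST.Col2;
```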
Related
I have a table "table1" like this:
+------+-------------+------+
| id   | barcode     | lot  |
+------+-------------+------+
| 0 | ABC-123-456 | |
| 1 | ABC-123-654 | |
| 2 | ABC-789-EFG | |
| 3 | ABC-456-EFG | |
+------+-------------+------+
I have to extract the number in the middle of the "barcode" column, as in this query:
SELECT SUBSTR(barcode, 5, 3) AS ToExtract FROM table1;
The result:
+-----------+
| ToExtract |
+-----------+
| 123 |
| 123 |
| 789 |
| 456 |
+-----------+
And insert the result into the "lot" column.
Follow the general UPDATE syntax:
UPDATE table_name
SET column1 = value1, column2 = value2, ...
WHERE condition;
i.e. in your case:
UPDATE table_name
SET lot = SUBSTR(barcode, 5, 3)
WHERE condition; -- if any

UPDATE table1 SET lot = SUBSTR(barcode, 5, 3)
-- WHERE ...;
Many databases support generated (aka "virtual"/"computed" columns). This allows you to define a column as an expression. The syntax is something like this:
alter table table1 add column lot varchar(3) generated always as (SUBSTR(barcode, 5, 3))
Using a generated column has several advantages:
It is always up-to-date.
It generally does not occupy any space.
There is no overhead when creating the table (although there is overhead when querying the table).
I should note that the syntax varies a bit among databases. Some don't require the type specification. Some use just as instead of generated always as.
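For instance (a sketch, not tested against every product: Postgres 12+ requires the STORED keyword, while MySQL 5.7+ defaults to VIRTUAL):

```sql
-- Postgres 12+: generated columns must be STORED
ALTER TABLE table1
  ADD COLUMN lot varchar(3) GENERATED ALWAYS AS (substr(barcode, 5, 3)) STORED;

-- MySQL 5.7+: VIRTUAL (computed on read) or STORED
ALTER TABLE table1
  ADD COLUMN lot varchar(3) AS (SUBSTR(barcode, 5, 3)) VIRTUAL;
```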
CREATE TABLE Table1 (id INT, barcode VARCHAR(255), lot VARCHAR(255));

INSERT INTO Table1 VALUES
  (0, 'ABC-123-456', NULL),
  (1, 'ABC-123-654', NULL),
  (2, 'ABC-789-EFG', NULL),
  (3, 'ABC-456-EFG', NULL);
UPDATE a
SET a.lot = SUBSTRING(b.barcode, 5, 3)
FROM Table1 a
INNER JOIN Table1 b ON a.id=b.id
WHERE a.lot IS NULL;
id | barcode | lot
-: | :---------- | :--
0 | ABC-123-456 | 123
1 | ABC-123-654 | 123
2 | ABC-789-EFG | 789
3 | ABC-456-EFG | 456
db<>fiddle here
How to concat columns data using loop in Postgres?
I have this table:
+------+------+------+--------+--------+--------+
| col1 | col2 | col3 | other1 | other2 | other3 |
+------+------+------+--------+--------+--------+
| 1 | 1 | 1 | 1 | 1 | 1 |
| 2 | 2 | 2 | 2 | 2 | 2 |
+------+------+------+--------+--------+--------+
and want to concat columns (col*).
Expected output:
+----------------+--------+--------+--------+
| concatedcolumn | other1 | other2 | other3 |
+----------------+--------+--------+--------+
| **1**1**1** | 1 | 1 | 1 |
| **2**2**2** | 2 | 2 | 2 |
+----------------+--------+--------+--------+
I can concat using:
select concat('**', col1, '**',col2, '**', col3, '**') as concatedcolumn
,other1, other2, other3
from sample_table
I have some 200 columns with the prefix "col" and don't want to spell out all the columns in SQL. How could I achieve this with a loop?
Questionable database design aside, you can generate the SELECT statement dynamically:
SELECT 'SELECT concat_ws(''**'', '
|| string_agg(quote_ident(attname), ', ') FILTER (WHERE attname LIKE 'col%')
|| ') AS concat_col, '
|| string_agg(quote_ident(attname), ', ') FILTER (WHERE attname NOT LIKE 'col%')
|| ' FROM public.tbl;' -- your table name here
FROM pg_attribute
WHERE attrelid = 'public.tbl'::regclass -- ... and here
AND attnum > 0
AND NOT attisdropped;
db<>fiddle here
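Given the sample table (assuming it is public.tbl with columns col1..col3 and other1..other3), the statement generated by the query above would come out roughly like this, ready to run as the second step:

```sql
SELECT concat_ws('**', col1, col2, col3) AS concat_col, other1, other2, other3
FROM public.tbl;
```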
Query the system catalog pg_attribute or, alternatively, the information schema table columns. I prefer the system catalog.
Related answer on dba.SE discussing "information schema vs. system catalogs"
Execute in a second step (after verifying it's what you want).
No loop involved. You can build the statement dynamically, but you cannot (easily) return the result dynamically as SQL demands to know the return type at execution time.
concat_ws() is convenient, but it ignores NULL values. I didn't deal with those specially. You may or may not want to do that. Related:
Combine two columns and add into one new column
How to concatenate columns in a Postgres SELECT?
I want to create a table where each column is a distinct value retrieved from select query.
Example:
[Query]
SELECT DISTINCT col
FROM table
[Result]
col
---
val1
val2
val3
val4
Requested table:
Column1 | Column2 | Column3 | ... | ColumnN
-------------------------------------------
val1 | val2 | val3 | ... | valN
There is an unknown number of distinct values. All columns should be created as type TEXT.
Is this possible using SQL without procedures?
Thanks.
You have to use a dynamic command. If you do not want to create a function, use an anonymous code block, for example:
create table table_cols(col text);
insert into table_cols values
('col1'),
('col2'),
('col3');
do $$
begin
execute format('create table new_table(%s text)', string_agg(distinct col, ' text, '))
from table_cols;
end
$$;
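For these three rows, the dynamic command that string_agg assembles and the block executes is equivalent to:

```sql
create table new_table(col1 text, col2 text, col3 text);
```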
Check:
\d new_table
Table "public.new_table"
Column | Type | Collation | Nullable | Default
--------+------+-----------+----------+---------
col1 | text | | |
col2 | text | | |
col3 | text | | |
I'm using Aginity Workbench for Netezza for the first time.
Does anyone know how to list columns and column types? The typical SQL code snippets I found online don't seem to work.
Thanks!
This snippet should do what you want.
SELECT
tablename,
attname AS COL_NAME,
b.FORMAT_TYPE AS COL_TYPE,
attnum AS COL_NUM
FROM _v_table a
JOIN _v_relation_column b
ON a.objid = b.objid
WHERE a.tablename = 'ATT_TEST'
AND a.schema = 'ADMIN'
ORDER BY attnum;
TABLENAME | COL_NAME | COL_TYPE | COL_NUM
-----------+-------------+----------------------+---------
ATT_TEST | COL_INT | INTEGER | 1
ATT_TEST | COL_NUMERIC | NUMERIC(10,2) | 2
ATT_TEST | COL_VARCHAR | CHARACTER VARYING(5) | 3
ATT_TEST | COL_DATE | DATE | 4
(4 rows)
In a Postgres 9.3 database I have a table in which one column contains JSON, as in the test table shown in the example below.
test=# create table things (id serial PRIMARY KEY, details json, other_field text);
CREATE TABLE
test=# \d things
Table "public.things"
Column | Type | Modifiers
-------------+---------+-----------------------------------------------------
id | integer | not null default nextval('things_id_seq'::regclass)
details | json |
other_field | text |
Indexes:
"things_pkey" PRIMARY KEY, btree (id)
test=# insert into things (details, other_field)
values ('[{"json1": 123, "json2": 456},{"json1": 124, "json2": 457}]', 'nonsense');
INSERT 0 1
test=# insert into things (details, other_field)
values ('[{"json1": 234, "json2": 567}]', 'piffle');
INSERT 0 1
test=# select * from things;
id | details | other_field
----+-------------------------------------------------------------+-------------
1 | [{"json1": 123, "json2": 456},{"json1": 124, "json2": 457}] | nonsense
2 | [{"json1": 234, "json2": 567}] | piffle
(2 rows)
The JSON is always an array containing a variable number of hashes. Each hash always has the same set of keys. I am trying to write a query which returns a row for each entry in the JSON array, with columns for each hash key and the id from the things table. I'm hoping for output like the following:
thing_id | json1 | json2
----------+-------+-------
1 | 123 | 456
1 | 124 | 457
2 | 234 | 567
i.e. two rows for entries with two items in the JSON array. Is it possible to get Postgres to do this?
json_populate_recordset feels like an essential part of the answer, but I can't get it to work with more than one row at once.
select id,
(details ->> 'json1')::int as json1,
(details ->> 'json2')::int as json2
from (
select id, json_array_elements(details) as details
from things
) s
;
id | json1 | json2
----+-------+-------
1 | 123 | 456
1 | 124 | 457
2 | 234 | 567
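Since LATERAL also arrived in Postgres 9.3, the same result can be written without the derived table (a sketch against the test data above; the single-column alias d names both the FROM item and its json column):

```sql
select t.id as thing_id,
       (d ->> 'json1')::int as json1,
       (d ->> 'json2')::int as json2
from things t
cross join lateral json_array_elements(t.details) as d;
```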