Postgres - How to cast an array of enum_1 to enum_2? - sql

I'm trying to modify the values of an enum in my schema ("feature" in the example below).
My approach is to rename the old enum, introduce a new one that has the values I want, and then alter the table definition to use the new enum.
I'm following this blog post here: https://blog.yo1.dog/updating-enum-values-in-postgresql-the-safe-and-easy-way/.
But instead of the column being a simple column of the enum, my column is actually an array of the enum.
When I run the alter table statement from the block below, I get this error:
[42804] ERROR: column "features" is of type feature_old[] but expression is of type feature_v2[] Hint: You will need to rewrite or cast the expression.
alter type feature rename to feature_old;
create type feature_v2 as enum (
'enable_create_keyword',
'enable_make_payment',
'enable_test_data_flags'
);
-- ... cleanup of column array values to be compatible with new enum ...
alter table app_user alter column features type feature_v2
using features::feature_old[]::feature_v2[];
drop type feature_old;
But, I'm lost - what should the cast expression look like?
Postgres version is 9.6
EDIT
This is the relevant part of the previous version's schema DDL for the feature enum and app_user table, as requested by @VaoTsun.
-- feature enum and column
create type feature as enum ('enable_create_keyword', 'enable_make_payment');
comment on type feature is
'if default functionality is disabled feature name starts with enable_, if default is enabled starts with disable_'
;
alter table app_user add column
features feature[] not null default ARRAY[]::feature[];
-- feature data
update app_user
set features = ARRAY['enable_create_keyword', 'enable_make_payment']::feature[]
where email = 'test1@example.com';
update app_user
set features = ARRAY['enable_create_keyword']::feature[]
where email = 'test2@example.com';

Thanks to both Vao Tsun and Nick Barnes; this is the code that appears to work for me. I have marked Vao Tsun's answer as correct. Any answers that provide a more concise version would be gratefully upvoted.
alter type feature rename to feature_old;
create type feature_v2 as enum (
'enable_create_keyword',
'enable_make_payment',
'enable_test_data_flags'
);
alter table app_user alter column features drop default;
alter table app_user alter column features type feature_v2[]
using features::feature_old[]::text[]::feature_v2[];
alter table app_user alter column features set default ARRAY[]::feature_v2[];
drop type feature_old;
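(For the record, the hop through text appears to be needed because Postgres has no direct cast between two unrelated enum types; element-wise, the cast above effectively does the following, shown here with a made-up value purely as an illustration:)
-- illustrative sketch only; each array element is cast via its text label
select 'enable_make_payment'::feature_old::text::feature_v2;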

Assuming the old enum has the same values (just fewer of them), you should be able to simply cast its values to text and then to the v2 type.
Try this:
t=# create or replace function feature2v2(feature_old) returns feature_v2 as
$$
select $1::text::feature_v2;
$$
language sql strict;
CREATE FUNCTION
t=# create cast (feature_old AS feature_v2) WITH FUNCTION feature2v2(feature_old) AS ASSIGNMENT;
CREATE CAST
gives me:
t=# alter table app_user alter column features type feature_v2[]
using features::feature_v2[];
ALTER TABLE
t=# \d+ app_user
Table "postgres.app_user"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
----------+--------------+-----------+----------+------------------------+----------+--------------+-------------
email | text | | | | extended | |
features | feature_v2[] | | not null | ARRAY[]::feature_old[] | extended | |
t=# \dT+ feature_v2
List of data types
Schema | Name | Internal name | Size | Elements | Owner | Access privileges | Description
----------+------------+---------------+------+------------------------+----------+-------------------+-------------
postgres | feature_v2 | feature_v2 | 4 | enable_create_keyword +| postgres | |
| | | | enable_make_payment +| | |
| | | | enable_test_data_flags | | |
(1 row)
which looks like what you expect
UPDATE
Catching up with Nick Barnes' comments: creating a cast here is overhead and leaves a bad default on the column. The right approach here is:
alter table app_user alter column features drop default;
alter table app_user alter column features type feature_v2[] using features::feature_old[]::text[]::feature_v2[];
alter table app_user alter column features set default ARRAY[]::feature_v2[];
I am leaving the previous version above untouched to demonstrate the bad approach, with hints as to why it is bad.
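If you did create the helper cast and function from the first version, presumably you would also want to drop them before dropping the old type; a rough sketch (assuming the names used above):
-- cleanup of the objects created in the bad approach, if they exist
drop cast if exists (feature_old as feature_v2);
drop function if exists feature2v2(feature_old);
drop type feature_old;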

Related

Hive alter table column fails when it has struct column

I've created hive external table.
CREATE EXTERNAL TABLE test_db.test_table (
`testfield` string,
`teststruct` struct<teststructfield:string>
)
ROW FORMAT SERDE
'org.apache.hive.hcatalog.data.JsonSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
'hdfs://some/path';
hive> describe test_table;
+-------------+---------------------------------+--------------------+
| col_name | data_type | comment |
+-------------+---------------------------------+--------------------+
| testfield | string | from deserializer |
| teststruct | struct<teststructfield:string> | from deserializer |
+-------------+---------------------------------+--------------------+
I want to alter a table column, but when the table has a struct column (teststruct),
an error occurs because of the < (less-than) sign.
ALTER TABLE test_db.test_table CHANGE COLUMN testfield testfield2 string;
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Concurrency mode is disabled, not creating a lock manager
INFO : Starting task [Stage-0:DDL] in serial mode
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Error: type expected at the position 7 of 'string:<derived from deserializer>' but '<' is found.
It succeeds when there is no struct column containing <. What should I do about this problem?
If nothing else helps, as a workaround you can drop and recreate the table, then recover the partitions. The table is EXTERNAL, so dropping it will not affect the data.
(1) Drop table
DROP TABLE test_db.test_table;
(2) Create table with required column name
CREATE EXTERNAL TABLE test_db.test_table (
testfield2 string,
teststruct struct<teststructfield:string>
)
PARTITIONED BY (....)
ROW FORMAT SERDE
'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION
'hdfs://some/path';
(3) Recover partitions
MSCK REPAIR TABLE test_db.test_table;
or if you are running Hive on EMR:
ALTER TABLE test_db.test_table RECOVER PARTITIONS;
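(4) Optionally verify the rebuilt table (a sketch, assuming the same names as above):
DESCRIBE test_db.test_table;
-- testfield2 should now appear in place of testfield, with teststruct unchanged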

SQL Error [XX000]: ERROR: Already present: The column already exists - Yugabyte DB

SQL Alter script:
alter table table_name ADD COLUMN IF NOT EXISTS chk_col jsonb not null
DEFAULT '{}'::jsonb;
With this script, I get "ERROR: Already present: The column already exists", but the column chk_col is not present in the table.
If I drop the database and create it again, the same script executes successfully.
How do I correct this without removing the database?
Can you explain your setup in more detail?
What version are you using?
Are any other steps needed to reproduce it?
I can't reproduce this in 2.7.1.1:
yugabyte=# create database fshije
yugabyte=# \c fshije
fshije=# create table table_name(id bigint);
fshije=# alter table table_name ADD COLUMN IF NOT EXISTS chk_col jsonb not null DEFAULT '{}'::jsonb;
fshije=# \d table_name
Table "public.table_name"
Column | Type | Collation | Nullable | Default
---------+--------+-----------+----------+-------------
id | bigint | | |
chk_col | jsonb | | not null | '{}'::jsonb

Create table with a variable name

I need to create tables on a daily basis, named with the date in the format yyMMdd. I tried this:
dbadmin=> \set table_name 'select to_char(current_date, \'yyMMdd \')'
dbadmin=> :table_name;
to_char
---------
150515
(1 row)
and then tried to create a table using the name from the set parameter :table_name, but got this:
dbadmin=> create table :table_name(col1 varchar(1));
ERROR 4856: Syntax error at or near "select" at character 14
LINE 1: create table select to_char(current_date, 'yyMMdd ')(col1 va...
Is there a way to store a value in a variable and then use that variable as the table name, or to force the inner select statement to execute first so it gives me the name I require?
Please suggest!
Try this.
For whatever reason the stored variable comes with some whitespace, which I had to remove; also, a table name cannot start with a number, so I had to prefix it with something like tbl_.
In short, you need to capture the query's output, so you have to do some extra work and execute the query through the shell:
\set table_name `vsql -U dbadmin -w d -t -c "select concat('tbl_',replace(to_char(current_date, 'yyMMdd'),' ',''))"`
Create table:
create table :table_name(col1 varchar(1));
(dbadmin#:5433) [dbadmin] *> \d tbl_150515
Schema | public
Table | tbl_150515
Column | col1
Type | varchar(1)
Size | 1
Default |
Not Null | f
Primary Key | f
Foreign Key |

Altering JSON column to INTEGER[] ARRAY

I have a JSON column that contains an array of integers. I am trying to convert it to an INTEGER[] column, but I'm running into casting errors.
Here's my final alter version:
ALTER TABLE namespace_list ALTER COLUMN namespace_ids TYPE INTEGER[] USING string_to_array(namespace_ids::integer[], ',');
However, this throws this error:
ERROR: cannot cast type json to integer[]
Any ideas how I can go about this conversion? I've tried several things but I end up with the same error. It seems that going json --> string --> array does not work. What are my options?
Edit:
Table definition:
db => \d+ namespace_list;
                             Table "kiwi.namespace_list"
    Column     |  Type   |                          Modifiers
---------------+---------+--------------------------------------------------------------
 id            | integer | not null default nextval('namespace_list_id_seq'::regclass)
 namespace_ids | json    | not null default '[]'::json
Sample data:
 id | namespace_ids
----+---------------
  1 | [1,2,3]
Assuming no invalid characters in your array.
In Postgres 9.4 or later, use a conversion function as outlined here:
How to turn JSON array into Postgres array?
CREATE OR REPLACE FUNCTION json_arr2int_arr(_js json)
RETURNS int[] LANGUAGE sql IMMUTABLE PARALLEL SAFE AS
'SELECT ARRAY(SELECT json_array_elements_text(_js)::int)';
ALTER TABLE namespace_list
ALTER COLUMN namespace_ids DROP DEFAULT
, ALTER COLUMN namespace_ids TYPE int[] USING json_arr2int_arr(namespace_ids);
db<>fiddle here
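For illustration, the function applied to a hypothetical literal:
SELECT json_arr2int_arr('[1,2,3]'::json);  -- returns {1,2,3}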
For Postgres 9.3 or older:
ALTER TABLE namespace_list
ALTER COLUMN namespace_ids TYPE INTEGER[]
USING translate(namespace_ids::text, '[]','{}')::int[];
The specific difficulty is that you cannot have a subquery expression in the USING clause, so unnesting & re-aggregating is not an option:
SELECT ARRAY(SELECT(json_array_elements(json_col)::text::int))
FROM namespace_list;
Therefore, I resort to string manipulation to produce a valid string constant for an integer array and cast it.
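For a single hypothetical value, that manipulation looks like this:
SELECT translate('[1,2,3]', '[]', '{}')::int[];  -- returns {1,2,3}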
column DEFAULT
If there is a column default like DEFAULT '[]'::json in your actual table definition (as added in your edit), drop it before you do the above.
You can add a new DEFAULT afterwards if you need one. Best in the same transaction (or even command):
ALTER TABLE namespace_list
ALTER COLUMN namespace_ids DROP DEFAULT
, ALTER COLUMN namespace_ids TYPE INT[] USING translate(namespace_ids::text, '[]','{}')::int[]
, ALTER COLUMN namespace_ids SET DEFAULT '{}';
db<>fiddle here
Old sqlfiddle

Postgres - Cast to boolean

I'm trying to update a Postgres database to set a boolean but I'm getting the following error
No operator matches the given name and argument type(s). You may need
to add explicit type casts.
I've cut down the table description to show it's structure.
Column | Type | Modifiers
--------------------+-----------------------------+-----------
archived | boolean |
The column in the db is currently empty, so I have no existing values to use for comparison.
I've tried the following:
UPDATE table_name SET archived=TRUE WHERE id=52;
UPDATE table_name SET archived='t' WHERE id=52;
UPDATE table_name SET archived='1' WHERE id=52;
UPDATE table_name SET archived='t'::boolean WHERE id=52;
None of these has worked.
How do I cast my UPDATE to a boolean?
UPDATE: full error message
play_mercury=# UPDATE opportunities SET archived=TRUE WHERE id=(52,55,35,17,36,22,7,2,27,15,10,9,13,5,34,40,30,23,21,8,26,18,3,42,25,20,41,28,19,14,39,44,16,24,4,33,54,47,29,38,64);
ERROR: operator does not exist: bigint = record
HINT: No operator matches the given name and argument type(s). You may need to add explicit type casts.
The problem is in the WHERE id=(52,55,...)
Use: WHERE id IN (52,55,...)
Your WHERE condition is wrong. You need to use IN instead of =
UPDATE opportunities
SET archived=TRUE
WHERE id IN (52,55,35,17,36,22,7,2,27,15,10,9,13,5,34,40,30,23,21,8,26,18,3,42,25,20,41,28,19,14,39,44,16,24,4,33,54,47,29,38,64);
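An equivalent form, if you prefer to keep =, is to compare against an array with ANY (a sketch with a shortened id list):
UPDATE opportunities
SET archived = TRUE
WHERE id = ANY (ARRAY[52, 55, 35]);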
How are you trying this? Via psql?
According to the docs those should be valid and this also works for me:
tmp=# create table bar (a boolean, b int);
CREATE TABLE
tmp=# insert into bar values (TRUE, 1);
INSERT 0 1
tmp=# update bar set a=false where b=1;
UPDATE 1
tmp=# \d bar
Table "public.bar"
Column | Type | Modifiers
--------+---------+-----------
a | boolean |
b | integer |