Auto UUID Creation on PostgreSQL COPY Command

I'm copying some data into a postgres database, and the CSV files do not have UUIDs. Currently the schema for the DB looks like this:
CREATE TABLE customers(
id UUID NOT NULL DEFAULT gen_random_uuid(),
customer_name VARCHAR(255) NOT NULL,
customer_age INT,
PRIMARY KEY(id)
);
The COPY statement is as follows:
COPY customers FROM '/usr/app/data/customers.csv' HEADER csv;
And the CSV file is shaped like this:
customer_name,customer_age
"Peter",12
"Sam",13
How can I have the UUID generated automatically during the COPY when it doesn't already exist in the CSV file? Currently this COPY command fails because it expects the first column of each row (customer_name) to be the UUID.

Sorry! This one is straightforward: you must specify the columns you're copying over, so the omitted id column falls back to its DEFAULT:
COPY customers (customer_name, customer_age) FROM '/usr/app/data/customers.csv' HEADER csv;
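With the column list in place, COPY fills the omitted id from its DEFAULT (gen_random_uuid() is built in from PostgreSQL 13 onwards; on older versions it is provided by the pgcrypto extension). If the CSV lives on the client machine rather than on the server, psql's \copy variant accepts the same column list; a sketch:
\copy customers (customer_name, customer_age) from 'customers.csv' csv header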

Related

How to add values missing from the file on the COPY command

I have a table where one column comes from a file, but I'm certain the value for the other column is missing from the file.
Here's the table:
create table if not exists "user"(
id varchar(36) primary key,
relevance varchar(3) not null,
constraint relevance_check check (relevance in ('ONE', 'TWO'))
);
The command I want to populate the table with:
copy "user"(id) from '/home/users_ids.txt';
The problem is that my column relevance is not null, and I'd like to set default values on the relevance column when copying, but I'm not sure if that's possible.
I can't set a permanent default value on the table because I need to import data from many files, and each one would have a different value in the relevance field.
Can I achieve what I want using the copy command, or is there another approach for this?
You can set the column's default value temporarily, e.g.:
alter table "user" alter relevance set default 'ONE';
copy "user"(id) from '/home/users_one.txt';
alter table "user" alter relevance set default 'TWO';
copy "user"(id) from '/home/users_two.txt';
alter table "user" alter relevance drop default;
This solution is simple and efficient when you are sure that only one session at a time runs the import, but it is not safe if more than one session imports simultaneously. A safer alternative in that case is to stage the data in a temporary table, e.g.:
create temp table ids(id text);
copy ids(id) from '/home/users_one.txt';
insert into "user"
select id, 'ONE'
from ids;
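The same staging pattern covers the second file; only the literal changes, e.g.:
truncate ids;
copy ids(id) from '/home/users_two.txt';
insert into "user"
select id, 'TWO'
from ids;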

Run .sql script for a particular database

I'm using Postgres's official Docker image. I have a script under /docker-entrypoint-initdb.d/ that creates two different databases for me. I want to initialize the table schemas for a particular database using a .sql file. The .sql file is very simple and looks something like this:
CREATE TABLE foo (
id INT PRIMARY KEY,
data VARCHAR
);
CREATE TABLE bar (
id INT PRIMARY KEY,
data VARCHAR
);
What I would like this to do is something like this (I'm making up this syntax):
FOR database_a AS username
CREATE TABLE foo (
id INT PRIMARY KEY,
data VARCHAR
)
CREATE TABLE bar (
id INT PRIMARY KEY,
data VARCHAR
)
Is this possible? Seems like a common feature but I can't find any information on it.
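Not exactly that syntax, but close: the entrypoint feeds *.sql files in /docker-entrypoint-initdb.d/ to psql, and psql's \connect (\c) meta-command switches the rest of the script to a particular database. A sketch, assuming the target database is database_a (pass a user name as a second argument to \connect if you also need a specific role):
\connect database_a
CREATE TABLE foo (
id INT PRIMARY KEY,
data VARCHAR
);
CREATE TABLE bar (
id INT PRIMARY KEY,
data VARCHAR
);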

H2 Database fails to find existing column

My Configuration file:
# H2
spring.h2.console.enabled=true
spring.h2.console.path=/h2
# Datasource
spring.datasource.url=jdbc:h2:file:~/test
spring.datasource.username=sa
spring.datasource.password=
spring.datasource.driver-class-name=org.h2.Driver
my data.sql script is something like :
CREATE TABLE IF NOT EXISTS people (
ID INT AUTO_INCREMENT NOT NULL PRIMARY KEY,
vname varchar(255) not null
);
INSERT INTO people(vname) VALUES ('Chuck Norris');
When this is executed, the INSERT fails with the error:
cannot find 'VNAME' column.
Why is the column name automatically upper-cased? Does that affect my INSERT command?
I just created the table, so why can't INSERT find the vname column?
Did you perhaps already create the table PEOPLE earlier, without the VNAME column? Your CREATE TABLE IF NOT EXISTS won't touch the table if it already exists. Remove the database files and try again...
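As an aside, H2 follows the SQL standard and folds unquoted identifiers to upper case, so PEOPLE and VNAME in the error message are simply your people and vname; the folding itself does not break the INSERT. Because jdbc:h2:file:~/test is a file-based database, an old copy of the table survives restarts. A sketch of a data.sql that always starts from a clean table (assuming it is acceptable to lose existing data):
DROP TABLE IF EXISTS people;
CREATE TABLE people (
ID INT AUTO_INCREMENT NOT NULL PRIMARY KEY,
vname varchar(255) not null
);
INSERT INTO people(vname) VALUES ('Chuck Norris');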

Use Oracle Apex to read csv line by line and validate data

I have installed Oracle Database and APEX on my local server.
I also created a table:
CREATE TABLE FAC_CODE_CPQLDN(
ID NUMBER(10),
CODE NVARCHAR2(100) NOT NULL,
NAME NVARCHAR2(1000),
DESCRIPTION BLOB,
PARENT_ID NUMBER(10),
CONSTRAINT FAC_CODE_CPQLDN_PK PRIMARY KEY (ID)
);
How can I use Oracle APEX to read a CSV file line by line and store it into "list_value" to validate the data before inserting it into the database?
If your CSV has fewer than 50 columns (which I think is your case), use the Excel2Collection plug-in.
Otherwise, implement your own PL/SQL procedure, e.g. PARSE_CSV.
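For the roll-your-own route, here is a minimal PL/SQL sketch of the idea (the inline l_csv literal stands in for the uploaded file's contents, and apex_string.split assumes APEX 5.1 or later):
DECLARE
  l_csv   varchar2(32767) := 'ID,CODE,NAME' || chr(10) ||
                             '1,C001,First row' || chr(10) ||
                             '2,C002,Second row';
  l_lines apex_t_varchar2;
  l_cols  apex_t_varchar2;
BEGIN
  l_lines := apex_string.split(l_csv, chr(10));
  -- start at 2 to skip the header line
  FOR i IN 2 .. l_lines.COUNT LOOP
    l_cols := apex_string.split(l_lines(i), ',');
    -- validate each field here before inserting
    IF l_cols.COUNT = 3 AND l_cols(2) IS NOT NULL THEN
      INSERT INTO FAC_CODE_CPQLDN (ID, CODE, NAME)
      VALUES (to_number(l_cols(1)), l_cols(2), l_cols(3));
    END IF;
  END LOOP;
END;
/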

Derby DB modify 'GENERATED' expression on column

I'm attempting to alter a Derby database that has a table like this:
CREATE TABLE sec_merch_categories (
category_id int NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
packageName VARCHAR(100) NOT NULL,
name VARCHAR(100) NOT NULL,
primary key(category_id)
);
I'd like to change the category_id column to be:
category_id int NOT NULL GENERATED BY DEFAULT AS IDENTITY (START WITH 1, INCREMENT BY 1),
The only place I've seen this documented is IBM's DB2 documentation, which advises dropping the expression, changing the integrity of the table, and adding the expression back. Is there any way of doing this in Derby?
Thanks,
Andrew
You can create a new table with the schema you want (but a different name), then use INSERT INTO ... SELECT FROM ... to copy the data from the old table to the new table (or unload the old table and reload it into the new table using the copy-data system procedures), then use RENAME TABLE to rename the old table to an alternate name and rename the new table to its desired name.
And, as @a_horse_with_no_name indicated in the comment above, all of these steps are documented in the Derby documentation on the Apache website.
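A sketch of that copy-and-rename sequence in Derby SQL (the _new and _old names are illustrative):
CREATE TABLE sec_merch_categories_new (
category_id int NOT NULL GENERATED BY DEFAULT AS IDENTITY (START WITH 1, INCREMENT BY 1),
packageName VARCHAR(100) NOT NULL,
name VARCHAR(100) NOT NULL,
primary key(category_id)
);
INSERT INTO sec_merch_categories_new (category_id, packageName, name)
SELECT category_id, packageName, name FROM sec_merch_categories;
RENAME TABLE sec_merch_categories TO sec_merch_categories_old;
RENAME TABLE sec_merch_categories_new TO sec_merch_categories;
After the copy you may also want ALTER TABLE sec_merch_categories ALTER COLUMN category_id RESTART WITH the next free value, so freshly generated ids do not collide with the copied ones.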