I'm currently trying to get some dummy data for my project, so I set up a Docker Compose file and I am trying to load in some data from a CSV file. I'm not getting any errors, just an empty database.
Docker:
version: "3.3"
services:
  maria-db:
    image: 'mariadb:latest'
    ports:
      - '3307:3306'
    environment:
      MYSQL_DATABASE: 'researchprojectdb'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'user'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'admin'
    volumes:
      - ./scripts/researchprojectdb.sql:/docker-entrypoint-initdb.d/1.sql
      - ./data:/var/lib/mysql-files
SQL Script:
CREATE TABLE ROLES (
    role_id int PRIMARY KEY,
    role varchar(20)
);
CREATE TABLE USERS (
    user_id int PRIMARY KEY,
    firstname varchar(83),
    lastname varchar(83),
    email varchar(83),
    password varchar(83),
    role_id int,
    FOREIGN KEY (role_id) REFERENCES ROLES(role_id)
);
INSERT INTO ROLES VALUES(1, 'student');
INSERT INTO USERS VALUES(1,'Jan', 'Met de pet', 'jan#depet.com', 'depet', 1);
-- LOAD DATA LOCAL INFILE '/var/lib/mysql-files/rolesdata.csv' into TABLE ROLES FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' IGNORE 0 ROWS;
-- LOAD DATA LOCAL INFILE '/var/lib/mysql-files/usersdata.csv' into TABLE USERS FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' IGNORE 0 ROWS;
Then I use two .csv files with the following data:
Roles:
1 student
2 administrator
3 company
Users:
1 Jan heas jan#metdepet.be depet 1
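For reference, one possible way to re-enable the commented-out loads, assuming the files are tab-separated (as the samples above suggest) and the server is allowed to read from /var/lib/mysql-files (check secure_file_priv); dropping LOCAL lets the server read the mounted files directly instead of requiring local_infile on the client:
-- assumes tab-separated files mounted at /var/lib/mysql-files (see the volumes above)
LOAD DATA INFILE '/var/lib/mysql-files/rolesdata.csv'
INTO TABLE ROLES
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';
LOAD DATA INFILE '/var/lib/mysql-files/usersdata.csv'
INTO TABLE USERS
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';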
I have the following table in PostgreSQL:
CREATE TABLE cars (id SERIAL PRIMARY KEY,
car_id SERIAL REFERENCES car_models (id) ON DELETE CASCADE);
When using COPY with the following:
COPY cars FROM '/Users/my-user/cars.csv' DELIMITER ',' CSV HEADER;
Containing:
id, car_id
1, 4
2, 3
3, 9
Then my primary key's sequence isn't incremented, so calling the following afterwards:
insert into cars (car_id) values (11)
fails with:
ERROR: duplicate key value violates unique constraint "cars_pkey"
DETAIL: Key (id)=(1) already exists.
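A quick way to confirm that the sequence is what lagged behind, assuming the default sequence name cars_id_seq that SERIAL creates:
-- after the COPY, last_value is still 1 and is_called is false,
-- so the next INSERT tries to use id = 1 again
SELECT last_value, is_called FROM cars_id_seq;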
It's easy to solve this problem: as shown below, you can set the start value of the sequence after you copy data into your table (your_now_max_value, for example 123).
alter sequence cars_id_seq restart with your_now_max_value;
The shell script copy_cars.sh might consist of the following four lines:
psql -d database -c "copy cars from 'xx.txt' with delimiter ','"
max_id=`psql -d database -c "copy (select max(id) from cars) to stdout"`
max_id=$(($max_id + 1))
psql -d database -c "alter sequence cars_id_seq restart with ${max_id}"
Of course, you can add some alerting code to make it more robust. Then you can set up a scheduler for the script to achieve your aim of running it twice a month.
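If you prefer to stay in SQL, the same adjustment can be done in a single statement, assuming the column is backed by the usual SERIAL sequence (pg_get_serial_sequence looks up its actual name):
-- set the sequence to the current maximum; the next nextval() then returns max(id) + 1
SELECT setval(pg_get_serial_sequence('cars', 'id'), (SELECT max(id) FROM cars));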
There is a flag 'EXPLICIT_IDS' which is used for this purpose.
Try using
COPY cars FROM '/Users/my-user/cars.csv' DELIMITER ',' CSV HEADER EXPLICIT_IDS;
Hope this helps.
Here is how I am doing it:
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
DROP SERVER IF EXISTS myserver CASCADE;
CREATE SERVER myserver FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '10.1.1.1', dbname 'mydb', port '5432');
DROP USER MAPPING IF EXISTS FOR user SERVER myserver;
CREATE USER MAPPING FOR user
SERVER myserver
OPTIONS (user 'user', password 'password');
CREATE FOREIGN TABLE IF NOT EXISTS foriegnemployee(
id int,
name text,
is_done boolean
)
SERVER myserver
OPTIONS (schema_name 'myschema', table_name 'employee');
When I run the following query, it says table mydb.employee does not exist:
Update foriegnemployee set is_done=true where id in (select id from sometable);
sometable is a local table; the employee table is in a remote DB.
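One quick check, run against the remote database itself (mydb on 10.1.1.1) rather than through the foreign table, to confirm the table really lives in the schema the OPTIONS point at:
-- run on the remote server; the schema should come back as 'myschema'
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_name = 'employee';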
I tried to mock up your tables:
t=# create database b;
CREATE DATABASE
t=# \c b
You are now connected to database "b" as user "vao".
b=# create table employee (is_done boolean, user_id int);
CREATE TABLE
b=# insert into employee select false,1;
INSERT 0 1
b=# \c t
You are now connected to database "t" as user "vao".
t=# create extension postgres_fdw;
CREATE EXTENSION
t=# create server b foreign data wrapper postgres_fdw options (dbname 'b');
CREATE SERVER
t=# create user mapping for vao server b;
CREATE USER MAPPING
t=# create foreign table ftb (is_done boolean, user_id int) server b options (table_name 'employee');
CREATE FOREIGN TABLE
t=# Update ftb set is_done=true where user_id in (select user_id from employee);
UPDATE 1
The code works.
Either the server or the table has wrong options, I suppose.
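To see which options the foreign table and server were actually created with on the local side, something like this against the system catalogs should show them (foriegnemployee and myserver are the names from the question):
-- shows schema_name/table_name for the foreign table and host/dbname/port for its server
SELECT c.relname, ft.ftoptions, s.srvname, s.srvoptions
FROM pg_foreign_table ft
JOIN pg_class c ON c.oid = ft.ftrelid
JOIN pg_foreign_server s ON s.oid = ft.ftserver
WHERE c.relname = 'foriegnemployee';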
This is my init.sql file:
DROP DATABASE IF EXISTS my_data;
CREATE DATABASE my_data;
DROP USER IF EXISTS u;
CREATE USER u WITH PASSWORD 'secret';
GRANT ALL ON ALL TABLES IN SCHEMA public TO u;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO u;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO u;
\c my_data;
CREATE TABLE users (
ID SERIAL PRIMARY KEY NOT NULL,
email VARCHAR NOT NULL DEFAULT '',
password VARCHAR NOT NULL DEFAULT '',
active SMALLINT NOT NULL DEFAULT '1',
created TIMESTAMP NOT NULL,
modified TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
salt VARCHAR NOT NULL DEFAULT ''
);
Then if I run:
psql -f init.sql
And..
psql -d my_data -U u
my_data=> select * from users;
ERROR: permission denied for relation users
Why would this permission be denied if I just granted it?
More info
my_data=> \z users
Access privileges
Schema | Name | Type | Access privileges | Column access privileges
--------+-------+-------+-------------------+--------------------------
public | users | table | |
(1 row)
Your grants ran too early: GRANT ... ON ALL TABLES IN SCHEMA only affects tables that already exist, and yours was executed before the users table was created (and before connecting to my_data). To give permissions on the tables you can use
GRANT ALL ON ALL TABLES IN SCHEMA public TO u;
Note that if you have sequences or other objects they also need separate permissions.
This has to be set after the tables have been created since permissions are set for existing objects.
If you want to set default permissions for new objects you can use
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO u;
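For example, init.sql could be reordered so the grants run inside my_data after the objects exist (roughly the same statements as above, just moved):
DROP DATABASE IF EXISTS my_data;
CREATE DATABASE my_data;
DROP USER IF EXISTS u;
CREATE USER u WITH PASSWORD 'secret';
\c my_data
CREATE TABLE users (
    ID SERIAL PRIMARY KEY NOT NULL,
    email VARCHAR NOT NULL DEFAULT '',
    password VARCHAR NOT NULL DEFAULT '',
    active SMALLINT NOT NULL DEFAULT '1',
    created TIMESTAMP NOT NULL,
    modified TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    salt VARCHAR NOT NULL DEFAULT ''
);
-- now the table (and its ID sequence) exist, so these grants actually apply
GRANT ALL ON ALL TABLES IN SCHEMA public TO u;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO u;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO u;
-- and future objects in this schema get the same privileges automatically
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO u;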
I am executing Flyway using Maven.
I have a SQL migration (it has DDL to create the tables DEPARTMENT and EMPLOYEE, as shown below).
I run mvn compile flyway:migrate
Here is the console log.
......
[INFO] Successfully validated 3 migrations (execution time 00:00.097s)
[DEBUG] Schema "PUBLIC" already exists. Skipping schema creation.
[DEBUG] Locking table "PUBLIC"."schema_version"...
[DEBUG] Lock acquired for table "PUBLIC"."schema_version"
[INFO] Current version of schema "PUBLIC": 1
[INFO] Migrating schema "PUBLIC" to version 1.1 - department
[DEBUG] Found statement at line 2: CREATE TABLE Department (
ID INTEGER GENERATED ALWAYS AS IDENTITY(START WITH 1) PRIMARY KEY,
NAME VARCHAR(32) NOT NULL ,
DESCRIPTION VARCHAR(100)
)
[DEBUG] Found statement at line 8: CREATE TABLE EMPLOYEE (
ID INTEGER GENERATED ALWAYS AS IDENTITY(START WITH 1) PRIMARY KEY ,
NAME VARCHAR(100) NOT NULL ,
DEPARTMENTID INTEGER FOREIGN KEY REFERENCES PUBLIC.DEPARTMENT(ID)
)
[DEBUG] Executing SQL: CREATE TABLE Department (
ID INTEGER GENERATED ALWAYS AS IDENTITY(START WITH 1) PRIMARY KEY,
NAME VARCHAR(32) NOT NULL ,
DESCRIPTION VARCHAR(100)
)
The execution hangs after the first CREATE TABLE, i.e. the Department table.
When I then kill it with Ctrl+C, I see that only the schema_version and Department tables get created.
I tried other ways of creating the tables, i.e. without ID generation, adding ';' at the end, and adding GO after each CREATE TABLE, but it did not help.
The same CREATE TABLE SQL runs successfully using the SQuirreL SQL client.
I am using Flyway version 4.0.3 and HSQLDB 2.3.4.
On debugging, it can be seen that it is waiting for the response I/O from the DB to complete at this point:
boolean hasResults = statement.execute(sql); within the class org.flywaydb.core.internal.dbsupport.JdbcTemplate
UPDATE:
As mentioned by Fredt, this issue is not seen when using hsqldb-2.3.3.
This issue is specific to HSQLDB 2.3.4, which has stricter locking. You can set the transaction model to MVCC to avoid the issue. Alternatively, use HSQLDB version 2.3.3.
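For example, assuming you can run a statement against the database before the migrations, the model can be switched like this (it can also be set through the hsqldb.tx=mvcc connection URL property):
-- switches HSQLDB from the default LOCKS model to MVCC
SET DATABASE TRANSACTION CONTROL MVCC;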
Is there a way we can add comments to a Liquibase changelog file which are not parsed by the program?
We are using the formatted SQL changelog for changes.sql, and this is how it looks:
--changeset Sapan.Parikh:MyUniqueAlphaNumericId5
--comment: Table created for liquibase testing purpose with non numeric id
--6:10 PM 11/10/2015
create table liqui_test11 (
id int primary key,
name varchar(255)
);
create table liqui_test9 (
id int primary key,
name varchar(255)
);
create table liqui_test10 (
id int primary key,
name varchar(255)
);
Our organization has been using a similar change log for years, and while migrating to Liquibase we want to be able to do two things:
Add dashes or hashes after each changeset.
And after every production build we add a comment at the end of the changes file.
For instance:
--changeset Sapan.Parikh:MyUniqueAlphaNumericId5
--comment: Table created for liquibase testing purpose with non numeric id
--6:10 PM 11/10/2015
create table liqui_test11 (
id int primary key,
name varchar(255)
);
-----------------------------------------------------------------
--changeset Sapan.Parikh:MyUniqueAlphaNumericId4
--comment: Table created for liquibase testing purpose with non numeric id
--6:10 PM 11/10/2015
create table liqui_test12 (
id int primary key,
name varchar(255)
);
###------------------Build 10.0.1 Made-------------------
Now if we add just dashes (-) or #, the Liquibase task breaks and the DB upgrade does not happen. Is there a way to accommodate comments which are not parsed by the Liquibase engine?
You can just keep your comments in the file and strip them out right before executing Liquibase; this can be done easily using sed.