PostgreSQL error: the "PostAudienceEnum" type already exists

I generated a migrate.sql file from the Prisma ORM, which I then imported into my PostgreSQL database from the Query Tool. However, when I run this file I get the error: the "PostAudienceEnum" type already exists. I don't understand why, since I did declare PostAudienceEnum as an enum. Here are the lines where the PostAudienceEnum enumeration appears:
-- CreateEnum
CREATE TYPE "PostAudienceEnum" AS ENUM ('PUBLIC', 'FRIENDS', 'ONLY_ME', 'SPECIFIC');
CREATE TABLE "Post" (
...
"audience" "PostAudienceEnum" NOT NULL DEFAULT E'PUBLIC',
...
CONSTRAINT "Post_pkey" PRIMARY KEY ("id")
);
This file was generated from my Prisma schema. I don't know how to modify it without messing up my database, or why PostgreSQL throws this error.

You might be getting this error if the database already has data, and you're attempting to manually execute the SQL file against the database. You should only use prisma migrate to manage changes in your schema against your database. Prisma internally records what migrations have and have not been executed against the database, and attempting to run an SQL file manually outside of prisma migrate defeats the purpose of using it in the first place.
Migrate docs: https://www.prisma.io/docs/concepts/components/prisma-migrate
You should ONLY use Prisma Migrate to make changes to your database tables/columns/relationships, and not an external tool. However, if you are developing your database FIRST and then looking to keep your Prisma schema up to date with your database (and not the other way around), you will want to introspect your database. Same deal applies: Prisma knows what parts of your database are reflected in your Prisma schema and what's not.
Introspection docs: https://www.prisma.io/docs/concepts/components/introspection
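As a rough sketch of both workflows (the migration name below is just a placeholder; run these from the project that contains your schema.prisma):
npx prisma migrate dev --name add_post_audience   # create and apply a new migration in development
npx prisma migrate deploy                         # apply already-generated migrations in production/CI
npx prisma db pull                                # introspect an existing database into schema.prisma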

Related

Databasechangelog file already exists error in postgres

I tried to set up a default schema, but it didn't work. I am using a Postgres DB and trying to test it on my local system.
I am not using any application; I am just trying to update the DB directly using a changelog.

Laravel migration - is it possible to use SQL instead of schema commands to create tables and fields etc?

We have an existing complex database schema complete with indexes, constraints, triggers, tables etc.
With liquibase, you can point to pure sql files in your changesets, which could be a dump of the whole DB for the first (initial schema creation) migration.
Is there any way to do this with the laravel artisan migration system?
We would like to do all our DB updates using the SQL language (because we already know it, and because we will only ever use MySQL), but we need the framework (migrate or Liquibase) to apply the changes in the right order etc. (so that it keeps a log on the DB of the changes already applied).
If not, has anyone used Liquibase with Laravel? The only issue is that it won't be able to read the .env DB connection strings, and that each developer will need to install Liquibase (not the end of the world, but if the Laravel built-in system can use SQL, it would save us time and effort).
Yes, it is possible to create migrations which use raw SQL
You are not limited to what code you can run in your migrations. Run raw SQL queries using the DB facade. This example shows both methods being used in the same migration.
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Schema;

class AddColumnsToUsersTable extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        // schema builder (Blueprint) calls
        Schema::table('users', function (Blueprint $table) {
            $table->string('name');
            $table->string('age');
            $table->timestamps();
        });

        // raw SQL through the DB facade
        DB::update('update users set age = 30 where name = ?', ['John']);
    }
}
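Since the question is about creating tables and fields, i.e. DDL rather than DML, the DB facade covers that too via DB::statement() / DB::unprepared(). A minimal sketch (the table and columns are made-up examples, and it assumes MySQL since that is the stated target database):
use Illuminate\Database\Migrations\Migration;
use Illuminate\Support\Facades\DB;

class CreateAuditLogTable extends Migration
{
    public function up()
    {
        // raw DDL handed to the database driver as-is
        DB::unprepared('
            CREATE TABLE audit_log (
                id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
                message TEXT NOT NULL,
                created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
            )
        ');
    }

    public function down()
    {
        // reverse migration drops the table again
        DB::unprepared('DROP TABLE IF EXISTS audit_log');
    }
}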
I believe there are plenty of benefits to using raw / plain SQL migrations in certain cases:
complex schemas,
taking advantage of vendor-specific extensions to SQL (both DDL and DML),
richer / additional data types,
using vendor-specific procedural code (e.g. PL/pgSQL anonymous blocks in PostgreSQL; see the sketch after this list),
etc.
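As an illustration of that last point, here is a minimal sketch of an anonymous block that creates an enum type only if it does not already exist (the type and label names are made up for the example); plain CREATE TYPE has no IF NOT EXISTS, so this cannot be expressed through the schema builder:
DO $$
BEGIN
    CREATE TYPE order_status AS ENUM ('PENDING', 'SHIPPED', 'CANCELLED');
EXCEPTION
    WHEN duplicate_object THEN
        NULL; -- the type already exists, nothing to do
END
$$;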
Although there is a way to execute raw SQL in Laravel migrations, writing and managing raw SQL code in variables even with heredoc is painful and unnatural.
Full disclaimer: here goes a shameless plug
Recently I've created an experimental package laravel-sql-migrations that I use in my projects to abstract the details of raw SQL execution and allow for writing and keeping SQL migrations in *.sql files almost like in Liquibase or Flyway.
The package, among other things, extends the make:migration and make:model commands so that you can use a familiar workflow:
php artisan make:migration create_users_table --sql
or
php artisan make:model User --migration --sql
which will produce three files
database
└── migrations
├── 2018_06_15_000000_create_users_table.down.sql
├── 2018_06_15_000000_create_users_table.php
└── 2018_06_15_000000_create_users_table.up.sql
At this point you can forget about 2018_06_15_000000_create_users_table.php unless you want to configure or override the behavior of this particular migration (e.g. set a specific connection and/or make use of transactions if your database supports them for DDL, as PostgreSQL does).
If you don't use reverse / down migrations you can delete the corresponding *.down.sql file.
Here is how migrations for the standard Laravel users table might look if you were to use PostgreSQL:
-- 2018_06_15_000000_create_users_table.up.sql
CREATE TABLE IF NOT EXISTS users (
id BIGSERIAL PRIMARY KEY,
name CITEXT,
email CITEXT,
password TEXT,
remember_token TEXT,
created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP
);
CREATE UNIQUE INDEX IF NOT EXISTS users_email_idx ON users (email);
-- 2018_06_15_000000_create_users_table.down.sql
DROP TABLE IF EXISTS users;
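Assuming the package hooks these *.sql files into the standard migration runner (as its description suggests), you would then apply and roll them back with the usual commands:
php artisan migrate
php artisan migrate:rollback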

Extract and recreate DDL of database schema elsewhere

I guess I just cannot formulate the search query appropriately, but I cannot find an answer to the following simple question: how to use extracted DDL pieces to recreate tables, views etc. in a different database or a different schema?
For example, when I extract table DDL with
SELECT dbms_metadata.get_dependent_ddl ('TABLE', TABLE-NAME, SCHEMA) FROM dual
I get output with a FOREIGN KEY in there. If I then naively issue the resulting CREATE TABLE statements on a different database in, say, alphabetical order of table names, I get a "table or view does not exist" error, because the constraints reference tables that have not been created yet.
What is the normal procedure for using this DDL? Is it (easily) possible to recreate the full schema structure (short of a full database dump) without using external tools?
You can use the datapump export CONTENT option to export only the metadata for a schema:
CONTENT=[ALL | DATA_ONLY | METADATA_ONLY]
ALL unloads both data and metadata. This is the default.
DATA_ONLY unloads only table row data; no database object definitions are unloaded.
METADATA_ONLY unloads only database object definitions; no table row data is unloaded. Be aware that if you specify CONTENT=METADATA_ONLY, then when the dump file is subsequently imported, any index or table statistics imported from the dump file will be locked after the import.
The import process will create the objects and constraints, taking the dependencies into account.
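For example, a metadata-only export of a single schema, and its import on the target database, might look roughly like this (the HR schema, the credentials and the DATA_PUMP_DIR directory object are placeholders):
expdp system/password schemas=HR content=METADATA_ONLY directory=DATA_PUMP_DIR dumpfile=hr_metadata.dmp
impdp system/password directory=DATA_PUMP_DIR dumpfile=hr_metadata.dmp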
If you want to see the DDL, and optionally run it manually, you can use the datapump import SQLFILE option to put the DDL into a file instead of executing it:
Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written.
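Reusing the dump from above, something like this would write the DDL to hr_ddl.sql instead of executing it (again just a sketch):
impdp system/password directory=DATA_PUMP_DIR dumpfile=hr_metadata.dmp sqlfile=hr_ddl.sql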
You can do similar things through SQL Developer and other clients, but those are 'external tools', whereas datapump might not fall into that category, even if you have to run it from the command line. There is a datapump API so you can even avoid the command line if you want to, though in some ways it's more complicated than using the expdp and impdp utilities.

How to export the schema of a database so it can be recreated on another server?

I have a database with data, but I would like to export the schema of the database to be able to create an empty database.
I create the script and select only tables and views, no users, because the idea is to install the database on many computers with different users. The permissions I will manage individually.
In the next step, in advanced options, I select that I want triggers, foreign key checks and all the other options, and I create the script.
However, I have some problems:
When I delete my database from the server and run the script, I get an error that says the database does not exist. Is it possible to add an option to the script to create the database?
If I create the database manually and then run the script, I get an error that says a column name is not valid.
At this point I am wondering what the correct way is to create a script of the schema so it can be exported to other servers?
Thanks so much.

Doctrine schema changes while keeping data?

We're developing a Doctrine backed website using YAML to define our schema. Our schema changes regularly (including fk relations) so we need to do a lot of:
Doctrine::generateModelsFromYaml(APPPATH . 'models/yaml', APPPATH . 'models', array('generateTableClasses' => true));
Doctrine::dropDatabases();
Doctrine::createDatabases();
Doctrine::createTablesFromModels();
We would like to keep existing data and store it back in the re-created database. So I copy the data into a temporary database before the main db is dropped.
How do I get the data from the "old-scheme DB copy" to the "new-scheme DB"? (the new scheme only contains NEW columns, NO COLUMNS ARE REMOVED)
NOTE:
This obviously doesn't work because the column count doesn't match.
INSERT INTO newscheme.Table SELECT * FROM copy.Table
This obviously does work; however, it takes too much time to write for every table:
INSERT INTO newscheme.Table SELECT old.col, old.col2, old.col3, 'somenewdefaultvalue' FROM copy.Table AS old
Have you looked into Migrations? They allow you to alter your database schema in a programmatic way, without losing data (unless you remove columns, of course).
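For reference, a Doctrine 1.x migration adding a new column with a default value could look roughly like this (the class, table and column names are made up; this assumes the Doctrine_Migration_Base API from Doctrine 1):
class AddStatusToArticles extends Doctrine_Migration_Base
{
    public function up()
    {
        // existing rows get the default value, so no data is lost
        $this->addColumn('articles', 'status', 'string', 32, array('default' => 'published'));
    }

    public function down()
    {
        $this->removeColumn('articles', 'status');
    }
}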
How about writing a script (using the Doctrine classes for example) which parses the yaml schema files (both the previous version and the "next" version) and generates the sql scripts to run? It would be a one-time job and not require that much work. The benefit of generating manual migration scripts is that you can easily store them in the version control system and replay version steps later on. If that's not something you need, you can just gather up changes in the code and do it directly through the database driver.
Of course, the fancier your schema changes become, the harder the maintenance will get, e.g. column name changes, null to not null, etc.