Releasing a PostgreSQL extension

I'm developing an application that stores its data in Postgres, so I must prepare the database before the application can run: a few tables have to be created. I've been creating these tables by running SQL scripts, but that no longer seems convenient after I found this in the docs:
A useful extension to PostgreSQL typically includes multiple SQL objects; for example, a new data type will require new functions, new operators, and probably new index operator classes. It is helpful to collect all these objects into a single package to simplify database management.
The main advantage of using an extension, rather than just running the SQL script to load a bunch of "loose" objects into your database, is that PostgreSQL will then understand that the objects of the extension go together.
I believe I should use this approach.
What I don't understand is how to share my extension. I thought it would work like Maven: you create your extension with custom types, functions and tables, then pack it, name it (e.g. my-ext-0.1), give it a version and release it into some kind of repository. After that you can connect to a database, run 'create extension my-ext-0.1' and have everything done :)
I thought the 'create extension' command would download the extension and install it, without fetching anything by hand. I use Maven and Ivy, and I expected similar behaviour from PostgreSQL.
The documentation says that you need to place your extension files in a specific directory, and only then run 'create extension' in a database.
How do you create your extensions and share them between different servers?

Postgres extensions do not work like this. They can access database internals and run arbitrary code as the database OS user. Installing them is therefore typically restricted to superusers, the files must come from a specific directory, and only a vetted subset is available on managed hosting servers.
I thought you could achieve something similar by installing your supplemental functions, types and tables in a special schema that is added to the search path. An upgrade would then be as simple as:
drop schema mylib cascade; -- don't do this!!!
create schema mylib;
\i mylib.sql
But unfortunately this would also remove all dependent objects from other schemas - columns using a custom type, triggers using a custom function etc. So it's not a solution for your problem.

I'd rather create my functions, types and everything else in my own schema, using the available extensions and "standard" languages.
Postgres will not download your extension for you (unless you write an extension that adds that functionality to Postgres). Your extension still has to be installed the "usual" way.
To find your "directory for extensions", run:
t=# create extension "where should I put control file";
ERROR: could not open extension control file "/usr/local/share/postgresql/extension/where should I put control file.control": No such file or directory
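For illustration, here is roughly what a minimal extension could look like on disk; the name my_ext and the function are invented for this sketch, and the exact share directory comes from pg_config --sharedir:
-- File: my_ext.control, placed in <sharedir>/extension/, containing e.g.:
--   comment = 'demo extension with one function'
--   default_version = '0.1'
--   relocatable = true

-- File: my_ext--0.1.sql, in the same directory; run by CREATE EXTENSION:
CREATE FUNCTION my_add(int, int) RETURNS int
    LANGUAGE sql IMMUTABLE
    AS $$ SELECT $1 + $2 $$;

-- Then, in any database on that server:
-- CREATE EXTENSION my_ext;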
And to repeat a comment: before extending SQL, please check out PL/pgSQL and the existing commands.
When you get bored and have made sure the existing Postgres functionality really is too limited, install the postgresql-contrib package and study other extensions as examples of best practice. And of course check out https://pgxn.org/

Related

What's the recommended way to do database migrations with Ktor + Exposed (Kotlin)?

Neither the Ktor nor the Exposed framework has built-in support for database migrations. What's the recommended way to do this?
If you are using Ktor with Gradle, I would recommend running Flyway programmatically inside the entry point of your Application. This way it can easily be part of your Continuous Delivery pipeline. You can see the Flyway docs on using the API here: https://flywaydb.org/documentation/api/
What I essentially do though is add the dependency (using Kotlin DSL):
implementation("org.flywaydb:flyway-core:6.5.2")
And then all you need to do is create an instance of Flyway and call migrate when you load your module:
fun Application.module() {
    // Run any pending migrations before the rest of the application starts
    Flyway.configure().dataSource(/* config to your DB */).load().migrate()

    // ...the rest of your application
    routing {
    }
}
You could of course extract the creation of Flyway to your DI tool (e.g. Koin) and add some logging to show progress.
This way your DB will be migrated (if necessary) every time just before your app is started.
As for writing the actual migrations, the official docs are very helpful. What you essentially need to do is:
Make sure you have the required directory for migration files (src/main/resources/db/migration by default).
Write plain SQL in separate migration files in that directory. The filenames need to stick to the convention: by default they start with a capital V, then a number (which you increment for each new migration), then a double underscore (this tricked me at the beginning ;)), then a snake_case description, e.g. V1__Create_person_table.sql - see the example after this list.
Run the app and observe magic :D
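A minimal first migration might look like this (the table and columns are invented for illustration):
-- src/main/resources/db/migration/V1__Create_person_table.sql
CREATE TABLE person (
    id   SERIAL PRIMARY KEY,  -- assumes PostgreSQL; adjust for your DB
    name VARCHAR(100) NOT NULL
);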
Database tables are migrated at deployment via Flyway, and scripts can be added in the db/migrations folder to add new tables or execute queries (such as inserting data) on server startup.
(https://github.com/arun0009/kotlin-ktor-exposed-sample-api)
Here's how Flyway works: https://flywaydb.org/getstarted/how
Download, install, and configure Flyway (See https://flywaydb.org/getstarted/firststeps/commandline)
Point it to a database, which is where the flyway_schema_history table will live
Write your migration files (for creating tables, inserting data, and so on) following the naming convention V<version number>__<migration description>, and run them with flyway migrate
Write your repeatable migration files (for creating views, for example) following the naming convention R__<migration description>, and run them with flyway migrate - see the sketch after this list
Write your undo migration files (for dropping tables or deleting data) following the naming convention U<version number>__<migration description>, and run them with flyway undo (note that undo migrations are a paid Flyway feature)
Check your migration status with flyway info and commit your files if you're happy
Make any necessary modifications and rerun the migration. Repeat and commit.
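A repeatable migration for a view might look like this (the view and underlying table are invented for illustration):
-- db/migration/R__Person_summary.sql
-- Repeatable migrations are re-applied whenever their checksum changes,
-- so the script must be re-runnable: use CREATE OR REPLACE.
CREATE OR REPLACE VIEW person_summary AS
SELECT name, COUNT(*) AS person_count
FROM person
GROUP BY name;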
If this is a one-time activity, you can try an off-the-shelf utility like SQL Data Compare.
To make this happen, you need to ensure that both databases are accessible from your machine, so that you can create two DB connections and run a comparison against them.
At the end of the comparison, you get an auto-generated SQL script that you can run against your new schema to synchronize it.
If you wish to compare schema objects as well, Red Gate provides a similar schema-comparison tool, which they now call SQL Compare (God knows why!). This utility likewise produces an auto-generated script to help you.
But again, Red Gate is good for a one-time migration, and you can use it via their trial version for a period of 30 days. For doing this on a regular basis, you would need to buy the licensed version of the software.
For data migration, I use Navicat Premium, which I find very easy to use, but it is not open source. If you are looking for an open-source tool, you can use SQLines Data, an open-source (Apache License 2.0), scalable, parallel, high-performance data transfer and schema conversion tool for database migrations and ETL processes.
SQLines
It is available for Linux, Windows, both 64-bit and 32-bit platforms.
You can also use SQLines Data for cross-platform database migration. The tool migrates table definitions, constraints, indexes and transfers data.
This is how you can get started with SQLines:
Download and unzip the file, no installation is required
Run sqldataw.exe on Windows to launch the GUI version
Run ./sqldata on Linux to launch the command line tool.
There are also migration guidelines available for specific databases: Guidelines

How to add breaking views to a Visual Studio SQL Server Database Project

I've created an SQL Server Database Project so that I can capture my database schema and add it to source control.
My problem is that the database contains views which reference external databases. Given the business and project environment, this is an acceptable solution in the short to medium term.
Sadly, this stops the database project from compiling (since it doesn't contain the external database tables).
What are my options for getting around this error? I'm currently storing the schema in a single generated script, which is a pain to update.
Look at creating dacpac files from the external databases and adding them as database references. I did that by using the SqlPackage command line to generate the file, putting the files in a "shared" folder (optional, but useful if this pattern recurs in other projects), then adding a database reference to the project. I recommend removing the variable for the DB name unless it can change in different environments. I blogged a bit about this here:
http://schottsql.blogspot.com/2012/10/ssdt-external-database-references.html
Now if it's a truly breaking change, I've done this through post-deploy scripts. Drop/recreate the view and reapply any permissions necessary. That's not ideal, but it can work.

Creating a local SQL database file?

Please note:
I am a game programmer, so backend development isn't my forte. There are times, however, where I work with our database at my job. Please don't shoot me if my question is ridiculous.
Is there a way to create a local MySQL file and access it through PHP or C#?
I know you can make a local webpage on your machine (pretty much for testing purposes) and access multiple locally created files.
I assume that something similar would work with MySQL. (Are the login credentials also stored within the file?) I remember seeing a few online tutorials that offered a download for both the PHP and the database file, but I can't seem to find them now.
I've searched for this, but all the relevant results involve downloading MySQL and hosting a server, which is a bit more than I wanted to do.
So if it's possible to create a local MySQL file, how do you do so?
The tools I intend on using while doing this:
PHP/JQUERY/HTML and C#
For MyISAM tables, inside the MySQL data directory there is one directory per database which contains several (usually three) files per table. For InnoDB tables, they are all contained in several files directly inside the data directory.
The location of the MySQL data directory is usually set in my.cnf using the datadir parameter.
The login credentials are stored in a special database called "mysql" which is in that data directory like any other database.
However, you have to install and run MySQL to access those files. You cannot access them with PHP or any other client API alone. If you want that kind of thing, you'd better use SQLite.
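If you do have a MySQL server running, you can see where its data directory is with a simple query:
-- Shows the server's data directory (one subdirectory per database)
SHOW VARIABLES LIKE 'datadir';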
MySQL is a database engine; you need to install it before you can use it. SQLite, by contrast, stores its database in files - maybe that is more to your liking. I know there are libraries that support SQLite for PHP; I'm not sure about the rest.
With SQLite you don't need to install anything.
MySQL can be used as an embedded database, but you will need to contact them in order to purchase a copy of it.

Options for generating create scripts for all objects in Oracle schema

I have several schemas in Oracle that must be promoted through dev, test, staging and production environments.
I need a command line tool that can take a script based snapshot of the dev environment (generate create scripts for a schema and all of its child objects which include OWB mappings and workflows).
What options exist that can be triggered from the command line and will generate create scripts suitable for inclusion in a source control system? The command line functionality is significant because the process will be triggered by a CI server (TeamCity).
Check out the built-in DBMS_METADATA package.
There are lots of usage examples on Stack Overflow (or just Google).
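For instance, a one-liner like this returns the CREATE statement for a single table (the table name EMP is just a placeholder):
-- Returns the DDL for one object as a CLOB
SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP') FROM DUAL;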
While much of the table structure etc. can be mapped across using a variety of tools, your OWB mappings cannot simply be copied into a new environment - they must be properly deployed, using either the OWB GUI or an OMB+ script, in order to be registered in the runtime repository. How you do that will depend on how you have the repositories configured.
I had posted an OMB+ script to deploy to a clean environment on the Oracle message boards a couple of years back. OWB has progressed a version or two since - but it might provide you with a starting point for that aspect of things.
Use expdp to dump the schema, and impdp with the SQLFILE option to generate a file of SQL commands to re-create the objects.

Is there a version control system for database structure changes?

I often run into the following problem.
I work on some changes to a project that require new tables or columns in the database. I make the database modifications and continue my work. Usually, I remember to write down the changes so that they can be replicated on the live system. However, I don't always remember what I've changed and I don't always remember to write it down.
So, I make a push to the live system and get a big, obvious error that there is no NewColumnX, ugh.
Regardless of the fact that this may not be the best practice for this situation, is there a version control system for databases? I don't care about the specific database technology. I just want to know if one exists. If it happens to work with MS SQL Server, then great.
In Ruby on Rails, there's a concept of a migration -- a quick script to change the database.
You generate a migration file, which has rules to increase the db version (such as adding a column) and rules to downgrade the version (such as removing a column). Each migration is numbered, and a table keeps track of your current db version.
To migrate up, you run a command called "db:migrate" which looks at your version and applies the needed scripts. You can migrate down in a similar way.
The migration scripts themselves are kept in a version control system -- whenever you change the database you check in a new script, and any developer can apply it to bring their local db to the latest version.
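Conceptually, a migration is an up/down pair, and the applied versions are recorded in a bookkeeping table. Sketched in plain SQL rather than Ruby (the users table and column are invented):
-- Rails records applied migrations in a table like this:
CREATE TABLE schema_migrations (version VARCHAR(255) PRIMARY KEY);

-- 'up' for migration 20240101120000: add a column
ALTER TABLE users ADD nickname VARCHAR(50);
INSERT INTO schema_migrations (version) VALUES ('20240101120000');

-- 'down' for the same migration: undo it and forget the version
ALTER TABLE users DROP COLUMN nickname;
DELETE FROM schema_migrations WHERE version = '20240101120000';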
I'm a bit old-school, in that I use source files for creating the database. There are actually two files - project-database.sql and project-updates.sql - the first for the schema and persistent data, the second for modifications. Of course, both are under source control.
When the database changes, I first update the main schema in project-database.sql, then copy the relevant statements (ALTER TABLE and so on) into project-updates.sql.
I can then apply the updates to the development database, test, and iterate until it's right.
Then, check in files, test again, and apply to production.
Also, I usually have a table in the db - Config - such as:
CREATE TABLE Config
(
    cfg_tag   VARCHAR(50),
    cfg_value VARCHAR(100)
);

INSERT INTO Config(cfg_tag, cfg_value) VALUES
    ('db_version',  '$Revision: $'),
    ('db_revision', '$Revision: $');
Then, I add the following to the update section:
UPDATE Config SET cfg_value='$Revision: $' WHERE cfg_tag='db_revision';
The db_version only gets changed when the database is recreated, and db_revision gives me an indication of how far the db is from the baseline.
I could keep the updates in their own separate files, but I chose to mash them all together and use cut & paste to extract the relevant sections. A bit more housekeeping is then in order, i.e. removing the ':' from '$Revision: 1.1 $' to freeze the keywords.
MyBatis (formerly iBATIS) has a schema migration tool for use on the command line. It is written in Java, though it can be used with any project.
To achieve a good database change management practice, we need to identify a few key goals.
Thus, the MyBatis Schema Migration System (or MyBatis Migrations for short) seeks to:
Work with any database, new or existing
Leverage the source control system (e.g. Subversion)
Enable concurrent developers or teams to work independently
Make conflicts very visible and easily manageable
Allow for forward and backward migration (evolve, devolve respectively)
Make the current status of the database easily accessible and comprehensible
Enable migrations despite access privileges or bureaucracy
Work with any methodology
Encourage good, consistent practices
Redgate has a product called SQL Source Control. It integrates with TFS, SVN, SourceGear Vault, Vault Pro, Mercurial, Perforce, and Git.
I highly recommend SQL Delta. I just use it to generate the diff scripts when I'm done coding my feature, and I check those scripts into my source control tool (Mercurial :))
They have both SQL Server and Oracle versions.
I'm surprised that no one has mentioned the open-source tool Liquibase, which is Java-based and should work with nearly every database that supports JDBC. Compared to Rails, it uses XML instead of Ruby to describe the schema changes. Although I dislike XML for domain-specific languages, the very cool advantage of XML is that Liquibase knows how to roll back certain operations, like:
<createTable tableName="USER">
    <column name="firstname" type="varchar(255)"/>
</createTable>
so you don't need to handle that yourself.
Plain SQL statements and data imports are also supported.
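For example, Liquibase's "formatted SQL" changelogs let you keep writing plain SQL while still declaring rollbacks in comments (the author/id and table below are made up for this sketch):
--liquibase formatted sql

--changeset alice:1
CREATE TABLE app_user (
    firstname VARCHAR(255)
);
--rollback DROP TABLE app_user;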
Most database engines should support dumping your database into a file. I know MySQL does, anyway. This will just be a text file, so you could submit that to Subversion, or whatever you use. It'd be easy to run a diff on the files too.
If you're using SQL Server it would be hard to beat Data Dude (aka the Database Edition of Visual Studio). Once you get the hang of it, doing a schema compare between your source controlled version of the database and the version in production is a breeze. And with a click you can generate your diff DDL.
There's an instructional video on MSDN that's very helpful.
I know about DBMS_METADATA and Toad, but if someone could come up with a Data Dude for Oracle then life would be really sweet.
Keep your initial CREATE TABLE statements in version control, then add ALTER TABLE statements - but never edit existing files, just add more alter files, ideally named sequentially or even grouped as a "change set", so you can find all the changes for a particular deployment (see the sketch below).
The hardest part that I can see is tracking dependencies: for a particular deployment, table B might need to be updated before table A.
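A minimal sketch of that layout (the file names and table are invented):
-- 0001_create_customer.sql: initial schema; never edited afterwards
CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(100));

-- 0002_add_customer_email.sql: a later, sequentially numbered change
ALTER TABLE customer ADD email VARCHAR(255);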
For Oracle, I use Toad, which can dump a schema to a number of discrete files (e.g., one file per table). I have some scripts that manage this collection in Perforce, but I think it should be easily doable in just about any revision control system.
Take a look at the Oracle package DBMS_METADATA.
In particular, the following methods are particularly useful:
DBMS_METADATA.GET_DDL
DBMS_METADATA.SET_TRANSFORM_PARAM
DBMS_METADATA.GET_GRANTED_DDL
Once you are familiar with how they work (they're pretty self-explanatory), you can write a simple script that dumps the results of those methods into text files that can be put under source control (a sketch follows below). Good luck!
Not sure if there is something this simple for MSSQL.
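A sketch of such a dump script for Oracle, run from SQL*Plus (the schema name MYSCHEMA is a placeholder):
-- Make the generated DDL re-runnable: terminate each statement with ';'
BEGIN
    DBMS_METADATA.SET_TRANSFORM_PARAM(
        DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);
END;
/

-- Emit the DDL for every table in the schema; spool this to a text file
SELECT DBMS_METADATA.GET_DDL(object_type, object_name, owner)
FROM   all_objects
WHERE  owner = 'MYSCHEMA'
AND    object_type = 'TABLE';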
I write my db release scripts in parallel with coding, and keep the release scripts in a project specific section in SS. If I make a change to the code that requires a db change, then I update the release script at the same time.
Prior to release, I run the release script on a clean dev db (copied structure-wise from production) and do my final testing on it.
I've done this off and on for years -- managing (or trying to manage) schema versions. The best approaches depend on the tools you have. If you can get the Quest Software tool "Schema Manager" you'll be in good shape. Oracle has its own, inferior tool that is also called "Schema Manager" (confusing much?) that I don't recommend.
Without an automated tool (see other comments here about Data Dude) then you'll be using scripts and DDL files directly. Pick an approach, document it, and follow it rigorously. I like having the ability to re-create the database at any given moment, so I prefer to have a full DDL export of the entire database (if I'm the DBA), or of the developer schema (if I'm in product-development mode).
PL/SQL Developer, a tool from Allround Automations, has a plug-in for repositories that works OK (but not great) with Visual SourceSafe.
From the web:
The Version Control Plug-In provides a tight integration between the PL/SQL Developer IDE and any Version Control System that supports the Microsoft SCC Interface Specification. This includes most popular Version Control Systems such as Microsoft Visual SourceSafe, Merant PVCS and MKS Source Integrity.
http://www.allroundautomations.com/plsvcs.html
ER Studio allows you to reverse your database schema into the tool and you can then compare it to live databases.
Example: Reverse your development schema into ER Studio -- compare it to production and it will list all of the differences. It can script the changes or just push them through automatically.
Once you have a schema in ER Studio, you can either save the creation script or save it as a proprietary binary and keep it in version control. If you ever want to go back to a past version of the schema, just check it out and push it to your db platform.
There's a PHP5 "database migration framework" called Ruckusing. I haven't used it, but the examples show the idea: if you use the language to create the database as and when needed, you only have to track source files.
We've used MS Team System Database Edition with pretty good success. It integrates with TFS version control and Visual Studio more-or-less seamlessly and allows us to manage stored procs, views, etc. easily. Conflict resolution can be a pain, but version history is complete once it's done. Thereafter, migrations to QA and production are extremely simple.
It's fair to say that it's a version 1.0 product, though, and is not without a few issues.
You can use Microsoft SQL Server Data Tools in Visual Studio to generate scripts for database objects as part of a SQL Server project. You can then add the scripts to source control using the source-control integration that is built into Visual Studio. SQL Server projects also let you verify the database objects using a compiler and generate deployment scripts to update an existing database or create a new one.
In the absence of a VCS for table changes I've been logging them in a wiki. At least then I can see when and why it was changed. It's far from perfect as not everyone is doing it and we have multiple product versions in use, but better than nothing.
I'd recommend one of two approaches. First, invest in PowerDesigner from Sybase, Enterprise Edition. It allows you to design physical data models and a whole lot more, and it comes with a repository that lets you check in your models. Each new check-in can be a new version; it can compare any version to any other version, and even to what is in your database at that time. It will then present a list of every difference and ask which should be migrated... and then it builds the script to do it. It's not cheap, but it's a bargain at twice the price, and its ROI is about 6 months.
The other idea is to turn on DDL auditing (this works in Oracle). This will create a table recording every change you make. If you query the changes from the timestamp you last moved your database changes to prod up to the present, you'll have an ordered list of everything you've done. A few WHERE clauses to eliminate zero-sum changes, like create table foo; followed by drop table foo;, and you can easily build a mod script. Why keep the changes in a wiki? That's double the work. Let the database track them for you; a minimal sketch follows.
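In Oracle, such auditing can be approximated with a schema-level DDL trigger (the table and trigger names here are invented):
-- Table that records every DDL statement issued in the schema
CREATE TABLE ddl_log (
    ddl_date    DATE,
    ddl_event   VARCHAR2(30),
    object_name VARCHAR2(128)
);

-- ora_sysevent and ora_dict_obj_name are Oracle's event attribute functions
CREATE OR REPLACE TRIGGER trg_log_ddl
AFTER DDL ON SCHEMA
BEGIN
    INSERT INTO ddl_log (ddl_date, ddl_event, object_name)
    VALUES (SYSDATE, ora_sysevent, ora_dict_obj_name);
END;
/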
Schema Compare for Oracle is a tool specifically designed to migrate changes from one Oracle database to another. Please visit the URL below for the download link, where you will be able to use the software as a fully functional trial.
http://www.red-gate.com/Products/schema_compare_for_oracle/index.htm
Two book recommendations: "Refactoring Databases" by Ambler and Sadalage and "Agile Database Techniques" by Ambler.
Someone mentioned Rails Migrations. I think they work great, even outside of Rails applications. I used them on an ASP application with SQL Server which we were in the process of moving to Rails. You check the migration scripts themselves into the VCS.
Here's a post by Pragmatic Dave Thomas on the subject.