Flyway repeatable migration for a view can't be dropped - sql

We have a bunch of views in postgres that are created as repeatable migrations by Flyway.
The problem we have encountered is that if we try to rename a column using CREATE OR REPLACE VIEW, Postgres throws an error and refuses to do so.
One option is to drop the view first, but that causes a problem if something else depends on the view: the drop will also throw an error.
Is there any way of dealing with this without having to write complicated scripts to drop every table/view that depends on this view, which would also require recreating the other views? That process can get very messy, so I'm wondering if there is a more elegant solution.

You cannot use CREATE OR REPLACE for this because it was designed only to extend the column list:
CREATE VIEW
CREATE OR REPLACE VIEW is similar, but if a view of the same name already exists, it is replaced. The new query must generate the same columns that were generated by the existing view query (that is, the same column names in the same order and with the same data types), but it may add additional columns to the end of the list. The calculations giving rise to the output columns may be completely different.
Options:
make changes that are backward compatible, i.e. only add new columns
drop and recreate the view (you need to handle the object dependencies; see the sketch below)
Flyway is a migration-based tool; you could search for a "state-based" migration tool for PostgreSQL (SQL Server has SSDT). Related: State- or migrations-based database development
6 Version Control tools for PostgreSQL
State-based tools - generate the scripts for a database upgrade by comparing the database structure to the model (the etalon, i.e. the reference model).
Migration-based tools - help/assist in creating migration scripts for moving the database from one version to the next.
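As a sketch of the drop-and-recreate option inside a single repeatable migration, dependent views are dropped first and rebuilt after the base view (all view, table, and column names here are hypothetical):

-- R__views.sql: hypothetical Flyway repeatable migration
-- Drop dependents first (or use DROP VIEW ... CASCADE on the base view)
DROP VIEW IF EXISTS v_order_totals;  -- depends on v_orders
DROP VIEW IF EXISTS v_orders;

-- Recreate the base view with the renamed column
CREATE VIEW v_orders AS
SELECT id, customer_id AS buyer_id, amount
FROM orders;

-- Recreate the dependent view on top of it
CREATE VIEW v_order_totals AS
SELECT buyer_id, SUM(amount) AS total
FROM v_orders
GROUP BY buyer_id;

Since PostgreSQL supports transactional DDL and Flyway by default runs each migration in a single transaction, other sessions should never observe the views as missing.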

Related

How to create history tables or audit tables on ALL_IND_COLUMNS?

I am writing a utility which keeps track of dropped or missing indices. I got to know about two index tables, namely ALL_IND_COLUMNS and ALL_INDEXES, which contain all the indices associated with each table in the database. I'm using ALL_IND_COLUMNS because it even contains column names.
Now I want to create a history table which keeps track of all the changes to ALL_IND_COLUMNS. I had thought of writing a trigger so that when there is an insert, delete, or update on ALL_IND_COLUMNS the data would all be inserted into the history table, but I heard there will be performance issues if we create triggers on data dictionary tables. So I want to know if there is any better alternative to solve this problem in SQL or PL/SQL. I'm using Oracle 11g.
Thanks in advance.
Indexes are NOT meant to be created and dropped frequently. Even if you do make such frequent changes, you should be able to track them using source code version control.
There are many tools available for VERSION CONTROL. You should install and create required tags and branches for your database objects. Any modification to the database objects should go through database version control.
For example, the scripts that you use to create/drop the indexes, should be in the version control under INDEXES.
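For instance, a versioned script under INDEXES might be as small as this (the table, column, and index names are hypothetical):

-- indexes/emp_dept_ix.sql
CREATE INDEX emp_dept_ix ON employees (department_id);

-- companion rollback script:
-- DROP INDEX emp_dept_ix;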
Checkout the code/scripts from repository to you local directory
Make necessary modifications
Test it locally
Check in your changes with required description
I personally use Subversion for my database version control.
For more details, read this link Using Source Code Control in Oracle SQL Developer
Read this wiki link about Revision control, also known as version control and source control

AS400 SQL query similar to CLRLIB (clear library) in native AS400

I'm working on an AS400 database and I need to manipulate libraries/collections with SQL.
I need to recreate something similar to the CLRLIB command, but I can't find a good way to do this.
Is there a way to delete all the tables from a library with an SQL query?
Maybe I can drop the collection and create a new one with the same name. But I don't know if this is a good way to clear the library.
RESOLVED:
Thanks to Buck Calabro for his solution.
I use the following statement to call CLRLIB from SQL:
CALL QSYS.QCMDEXC('CLRLIB LIB_NAME ASPDEV(ASP_NAME)', 0000000032.00000)
Where LIB_NAME is the name of the library I want to clear, ASP_NAME is the name of the ASP where the library resides, and 0000000032.00000 is the command length.
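On recent IBM i releases there is also a one-parameter variant in the QSYS2 schema that computes the command length for you (availability depends on your OS level, so treat this as an assumption to verify):

CALL QSYS2.QCMDEXC('CLRLIB LIB_NAME ASPDEV(ASP_NAME)')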
(note that the term COLLECTION has been deprecated, SCHEMA is the current term)
Since a library can contain both SQL and non-SQL objects, there's no SQL way to delete every possible object type.
Dropping the schema and recreating it might work. But note that if the library is in a job's library list, it will have a lock on it and you will not be able to drop it. Also, unless the library was originally created via CREATE SCHEMA (or CREATE COLLECTION) you're going to end up with differences.
CRTLIB creates an empty library, CREATE SCHEMA creates a library plus objects needed for automatic journaling and a dozen or so SQL system views.
Read Charles' answer - there may be objects in your schema that you want to keep (data areas, programs, display and printer files, etc.). If the problem is deleting all of the tables so you can rebuild all of the tables, then look at the various system catalog tables: SYSTABLES, SYSVIEWS, SYSINDEXES, etc. The system catalog 'knows' about all of the SQL tables, indexes, views, stored procedures, triggers and so on. You could read the catalog and issue the appropriate SQL DROP statements.
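A minimal sketch of that catalog-driven approach, generating one DROP statement per SQL table in a library from QSYS2.SYSTABLES (the library name MYLIB is hypothetical):

-- TABLE_TYPE = 'T' limits the result to SQL tables
SELECT 'DROP TABLE ' CONCAT TABLE_SCHEMA CONCAT '.' CONCAT TABLE_NAME
FROM QSYS2.SYSTABLES
WHERE TABLE_SCHEMA = 'MYLIB'
  AND TABLE_TYPE = 'T'

The generated statements can then be executed one at a time; views (from SYSVIEWS) should be dropped first, since they may depend on the tables.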

Expected database model is inconsistent in real-time

In this question, I was facing an issue where I was writing an update for a deployed application to bring the database up to date with the newer version we are deploying. Basic outline as follows:
Began with currently deployed version of application
Added new functionality that used existing database
Added new database tables and relationships
Added new functionality that depended on the new database structure
Testing complete, ready for deployment
The issue here is that the currently deployed application has been in use for a few months and has a lot of data that would need to be preserved, so simply replacing the old with the new was not viable (at least not for the database, but of course it works for the code). So I used the following steps to write a script in SQL for the updated version of the application to run the first time it starts up to make the necessary changes to the database without touching existing data (aside from populating the new tables):
Use VS2010's "Generate database from model" functionality to create a .sql (the model was originally created using the "Generate model from database" functionality)
Remove all parts of the .sql that act on the existing tables, except for those that add FKs between new and old tables
Use the resulting script to build the new database
Sounds pretty clean and done, right? Wrong. The mapping from the model to the database was all wrong for the new tables. Long story short, the database that generated the model had tables named in the plural (and the mapping was correct and the application worked), and the database generated by the model also created tables named in the plural (identical in name to the tables in the database the model was generated from), yet the model did not map to them. The solution ended up being to change the script to name the tables in the singular, and then everything worked flawlessly.
What happened here? The code remained untouched, no changes were made to the model, and the old tables continued to work fine the entire time, yet somewhere in the process of
Generate script
Delete "new" tables and constraints (those that don't yet exist in the deployed version)
Run script to re-add the tables
the mapping ended up pointing at singularly named tables (User instead of Users, Address instead of Addresses, etc.).
Can anyone explain to me how/why this would happen this way?
You might want to look at some of the tools that Redgate supplies - good tools for comparing two DB structures and generating a script to update.
http://www.red-gate.com/

Keeping a database schema up to date

I'm writing an application that is using a database (currently MySQL 4) to store data.
It is likely that I will make changes to this in the form of updates later to add additional data. Updating the application is simple, it essentially comes down to overwriting the program files with the new ones. However how do I go about updating the database schema?
The database is remote and so my application might exist in several places, so simply dumping the ALTER and CREATE statements in an installer would result in the changes being made multiple times, and I have been asked explicitly for an automatic solution that allows for the application copies to be updated over a transition period, and for schema updates to be automatic.
I considered examining the schema at start-up to look for missing tables and columns and adding them as needed; however, this does not seem like a clean solution. I also considered putting some kind of "schema version" number in the database, but I can't see any way to do this short of a single-row table with an int "Version" column, which doesn't seem like a good way either.
I can highly recommend Liquibase. It really does work - I've used it and was very impressed.
Essentially, it keeps its own log of statements run on a database and runs them only if not already run/needed. It is XML driven and allows you to use optional pre- and post-execution statements and conditions. You check your XML files into your source control and invoke it from your build tool. It's even suitable for driving production releases.
It's magic.
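Besides the XML format, current Liquibase versions also accept changelogs written as plain SQL with comment directives; a minimal sketch (the author, ids, and table names are hypothetical):

--liquibase formatted sql

--changeset alice:1
CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(100));

--changeset alice:2
ALTER TABLE customer ADD COLUMN email VARCHAR(255);

Each changeset is recorded in Liquibase's own DATABASECHANGELOG table the first time it runs, so re-running the changelog applies only the changesets that have not been executed before.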
Rather than rolling your own system for versioning your database it's probably worth looking into an existing framework that will manage it for you.
I use Liquibase and have integrated it into my build using the Maven plugin. Worth checking out!
Just as you proposed, add a table where you store the current version of the database schema. Then you only have to apply the changes between your last schema update and the new release, and set the new version number accordingly. I've done this to update our production database about 300 times, and it just works.
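A minimal sketch of that approach (the table, column, and example object names are just one possible convention):

-- run once, at first install
CREATE TABLE schema_version (version INT NOT NULL);
INSERT INTO schema_version VALUES (1);

-- v2 upgrade script: apply the delta, then bump the version
ALTER TABLE customer ADD COLUMN email VARCHAR(255);
UPDATE schema_version SET version = 2;

At start-up the application reads schema_version and runs, in order, every upgrade script whose target version is higher than the stored value.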

Scripts for moving schema changes from development database to production database

I'm trying to head this one off at the pass. I've got two database servers (DEV and PRD) and I have my database on the DEV server. I am looking to deploy v1 of my application to PRD server.
The question is this: Say in two months, I am ready to ship v1.1 of my application, which introduces two new VIEWS, six new fields (three fields in each of two tables), and an updated version of my sproc that creates records in the tables with new fields. My DEV database has the new schema, but my PRD database has the real data, so I can't simply copy the .mdf file, since I want to keep my PRD data but include my new schema.
I understand doing the initial creation of tables, views, and sprocs via saved .sql files; but what I'm wondering is, is it possible to use SSMS to create the appropriate "alter table" scripts, or do I need to do this manually?
I have handled this with a release update SQL script that applies the changes to the previous version.
You either need to code this yourself or use one of the many DBA tools to do database compares and generate a diff script.
There are tools that will do this for you; SQL Compare is one of them and the one I like best.
Otherwise you have to code these yourself - and don't forget to also script the permissions if you recreate the proc (if you use ALTER PROC instead, permissions are preserved).
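To illustrate the permissions point in T-SQL (the procedure and user names are hypothetical):

-- DROP + CREATE loses existing grants; they must be re-scripted:
DROP PROCEDURE dbo.usp_CreateOrder;
GO
CREATE PROCEDURE dbo.usp_CreateOrder AS
    SELECT 1;  -- placeholder body
GO
GRANT EXECUTE ON dbo.usp_CreateOrder TO app_user;
GO

-- ALTER keeps the existing grants intact:
ALTER PROCEDURE dbo.usp_CreateOrder AS
    SELECT 2;  -- updated body
GO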
Since your database changes should be in scripts that are under source control, you just load them with the version that you are moving to prod, just like any other code associated with that version. One thing you never do, under any circumstances, is make changes to the dev (or any other) database using the user interface.
Try the patching engine found in DBSourceTools.
http://dbsourcetools.codeplex.com
DBSourceTools is a utility to help developers get their databases under source control.
Simply point it at a source database, and it will script all database objects, including data, to disk.
Once you have a target database (v1), you can then place your patch scripts into the patches directory, and DBSourceTools will run these patches in order after re-creating your database.
This is a very effective means of thoroughly testing your change scripts.