Is there any way to ignore dependencies while creating database objects?
For example, I want to create a function on the database that uses some table(s), but I want this function to be created before the tables are created.
I do not see any need for this. Do you see an advantage in doing this?
In Oracle, you can create a function, package, or procedure without the dependent objects being present in the database, but compilation then reports that the object was compiled with errors, and before you can use it you have to recompile it.
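A minimal sketch of that behaviour (all names here are hypothetical, not from the question):

```sql
-- The table emp does not exist yet, but the CREATE still succeeds;
-- Oracle reports "Warning: Function created with compilation errors."
CREATE OR REPLACE FUNCTION get_emp_name (p_id NUMBER) RETURN VARCHAR2 IS
  v_name VARCHAR2(100);
BEGIN
  SELECT ename INTO v_name FROM emp WHERE empno = p_id;
  RETURN v_name;
END;
/

-- Create the missing dependency...
CREATE TABLE emp (empno NUMBER, ename VARCHAR2(100));

-- ...then recompile the function so it becomes VALID.
ALTER FUNCTION get_emp_name COMPILE;
```

Oracle would also revalidate the function automatically the first time it is invoked after the table appears; the explicit COMPILE just makes the state visible up front.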
We have a bunch of views in postgres that are created as repeatable migrations by Flyway.
The problem we have encountered is that if we try to rename a column using CREATE OR REPLACE VIEW, Postgres throws an error and refuses to do so.
One option is to drop the view first. But this causes a problem if something else depends on the view, which will also throw an error.
Is there any way of dealing with this without having to write complicated scripts that drop any tables/views depending on this view? That would also require recreating the other views, so the process can get very messy, and I'm wondering if there is a more elegant solution.
You cannot use CREATE OR REPLACE for this, because it was designed only to extend the column list:
CREATE VIEW
CREATE OR REPLACE VIEW is similar, but if a view of the same name already exists, it is replaced. The new query must generate the same columns that were generated by the existing view query (that is, the same column names in the same order and with the same data types), but it may add additional columns to the end of the list. The calculations giving rise to the output columns may be completely different.
Options:
make changes that are backward compatible i.e. only adding new columns
drop and recreate the view (you need to handle the object dependencies)
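The two options above can be sketched as follows, assuming a hypothetical view v_orders over a table orders:

```sql
-- Existing view:
CREATE VIEW v_orders AS
  SELECT id, total AS amount FROM orders;

-- Renaming the column via REPLACE is rejected, roughly:
--   ERROR:  cannot change name of view column "amount" to "order_total"
-- CREATE OR REPLACE VIEW v_orders AS
--   SELECT id, total AS order_total FROM orders;

-- The drop-and-recreate route (DROP will itself fail if other
-- objects depend on v_orders, unless you cascade and recreate them too):
DROP VIEW v_orders;
CREATE VIEW v_orders AS
  SELECT id, total AS order_total FROM orders;
```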
Flyway is a migration-based tool; you could search for a "state-based" migration tool for PostgreSQL (SQL Server has SSDT). Related: State- or migrations-based database development
6 Version Control tools for PostgreSQL
State-based tools - generate the scripts for a database upgrade by comparing the database structure to the model (the reference state).
Migration-based tools - assist with the creation of migration scripts for moving a database from one version to the next.
I'm working on an AS400 database and I need to manipulate libraries/collections with SQL.
I need to recreate something similar to the CLRLIB command, but I can't find a good way to do this.
Is there a way to delete all the tables from a library with an SQL query?
Maybe I could drop the collection and create a new one with the same name, but I don't know if this is a good way to clear the library.
RESOLVED:
Thanks to Buck Calabro for his solution.
I use the following query to call CLRLIB from SQL:
CALL QSYS.QCMDEXC('CLRLIB LIB_NAME ASPDEV(ASP_NAME)', 0000000032.00000)
Where LIB_NAME is the name of the library I want to clear, ASP_NAME is the name of the ASP where the library resides, and 0000000032.00000 is the command length.
(note that the term COLLECTION has been deprecated, SCHEMA is the current term)
Since a library can contain both SQL and non-SQL objects, there's no SQL way to delete every possible object type.
Dropping the schema and recreating it might work. But note that if the library is in a job's library list, the job will hold a lock on it and you will not be able to drop it. Also, unless the library was originally created via CREATE SCHEMA (or CREATE COLLECTION), you're going to end up with differences.
CRTLIB creates an empty library, CREATE SCHEMA creates a library plus objects needed for automatic journaling and a dozen or so SQL system views.
Read Charles' answer - there may be objects in your schema that you want to keep (data areas, programs, display and printer files, etc.) If the problem is to delete all of the tables so you can re-build all of the tables, then look at the various system catalog tables: SYSTABLES, SYSVIEWS, SYSINDEXES, etc. The system catalog 'knows' about all of the SQL tables, indexes, views, stored procedures, triggers and so on. You could read the catalog and issue the appropriate SQL DROP statements.
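As a sketch of that catalog-driven approach, the following query generates DROP statements for every SQL table in a hypothetical library MYLIB; you would then execute the generated statements (and do the same against SYSVIEWS, SYSINDEXES, etc. for the other object types):

```sql
-- Build one DROP TABLE statement per SQL table in MYLIB.
-- TABLE_TYPE = 'T' limits the result to base tables; views ('V')
-- would be handled separately from QSYS2.SYSVIEWS.
SELECT 'DROP TABLE ' || TABLE_SCHEMA || '.' || TABLE_NAME
  FROM QSYS2.SYSTABLES
 WHERE TABLE_SCHEMA = 'MYLIB'
   AND TABLE_TYPE   = 'T';
```

Dependent views may still block individual drops, so the order in which you run the generated statements (views before tables) matters.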
An assignment I have as part of my pl/sql studies requires me to create a remote database connection and copy down all my tables to it from local, and then also copy my other objects that reference data, so my views and triggers etc.
The idea is that at the remote end, the views etc should reference the local tables provided the local database is online, and if it is not, then they should reference the tables stored on the remote database.
So I've created a connection, and a script that creates the tables at the remote end.
I've also written a PL/SQL block to create all the views and triggers at the remote end. It first runs a simple SELECT against the local database to check whether it is online. If it is, a series of EXECUTE IMMEDIATE statements creates the views etc. with references to table_name@local; if it isn't, the block drops into the exception section, where a similar series of EXECUTE IMMEDIATE statements creates the same views referencing the remote tables instead.
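For one table, that block looks roughly like this (the db link name "local" and the table emp are hypothetical stand-ins):

```sql
DECLARE
  v_check NUMBER;
BEGIN
  -- Probe: fails with an exception if the local DB is unreachable.
  SELECT 1 INTO v_check FROM dual@local;

  -- Local DB is up: point the view at the tables over the db link.
  EXECUTE IMMEDIATE
    'CREATE OR REPLACE VIEW emp_v AS SELECT * FROM emp@local';
EXCEPTION
  WHEN OTHERS THEN
    -- Local DB is down: fall back to the copies stored here.
    EXECUTE IMMEDIATE
      'CREATE OR REPLACE VIEW emp_v AS SELECT * FROM emp';
END;
/
```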
OK so this is where I become unsure.
I have a package that contains a few procedures and a function, and I'm not sure what's the best way to create that at the remote end so that it behaves in a similar way in terms of where it picks up its reference tables from.
Is it simply a case of enclosing the whole package-creating block within an 'execute immediate', in the same way as I did for the views, or should I create two different packages and call them something like pack1 and pack1_remote?
Or is there as I suspect a more efficient method of achieving the goal?
cheers!
This is absolutely not how any reasonable person in the real world would design a system. Suggesting something like what I suggest here in the real world will, in the best case, get you laughed out of the room.
The least insane approach I could envision would be to have two different schemas. Schema 1 would own the tables. Schema 2 would own the code. At install time, create synonyms for every object that schema 2 needs to reference. If the remote database is available when the code is installed, create synonyms that refer to objects in the remote database. Otherwise, create synonyms that refer to objects in the local database. That lets you create a single set of objects without using dynamic SQL by creating an extra layer of indirection between your code and your tables.
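A sketch of that synonym layer, with hypothetical schema names schema1/schema2, a table emp, and a db link remote_db (the install script would pick one of the two branches):

```sql
-- If the remote database is reachable at install time:
CREATE OR REPLACE SYNONYM schema2.emp FOR schema1.emp@remote_db;

-- Otherwise, fall back to the local copy:
CREATE OR REPLACE SYNONYM schema2.emp FOR schema1.emp;
```

schema2's packages and views then reference plain emp, so a single static set of code objects works against either location without any EXECUTE IMMEDIATE.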
In my code I am trying to check if my entity framework Code First model and Sql Azure database are in sync by using the "mycontext.Database.CompatibleWithModel(true)". However when there is an incompatibility this line falls over with the following exception.
"The model backing the 'MyContext' context has changed since the database was created. Either manually delete/update the database, or call Database.SetInitializer with an IDatabaseInitializer instance. For example, the DropCreateDatabaseIfModelChanges strategy will automatically delete and recreate the database, and optionally seed it with new data."
This seems to defeat the purpose of the check as the very check itself is falling over as a result of the incompatibility.
For various reasons I don't want to use the Database.SetInitializer approach.
Any suggestions?
Is this a particular Sql Azure problem?
Thanks
Martin
Please check out the ScottGu blog below:
http://weblogs.asp.net/scottgu/archive/2010/08/03/using-ef-code-first-with-an-existing-database.aspx
Here is what is going on and what to do about it:
When a model is first created, we run a DatabaseInitializer to do things like create the database if it's not there or add seed data. The default DatabaseInitializer tries to compare the database schema needed to use the model with a hash of the schema stored in an EdmMetadata table that is created with a database (when Code First is the one creating the database). Existing databases won't have the EdmMetadata table and so won't have the hash… and the implementation today will throw if that table is missing. We'll work on changing this behavior before we ship the final version since it is the default. Until then, existing databases do not generally need any database initializer so it can be turned off for your context type by calling:
Database.SetInitializer<Production>(null);
With the above code you are not recreating the database but using the existing one, so I don't think using Database.SetInitializer is a concern unless you have some serious reservations about it.
More info: Entity Framework Code Only error: the model backing the context has changed since the database was created
I have a few SQL scripts which set up the database of an app. Some of the scripts create packages that reference views, and similarly there are scripts creating views that reference packages.
Is there a way to separate these scripts so that each one creates only packages or only views, respectively?
Or is there any alternative way to handle this?
First, create the package specifications.
Second, create the views -- they reference the specification, not the body.
Third, create the package bodies -- they reference the views.
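The three steps can be sketched as follows (all names hypothetical):

```sql
-- 1. Package specification: declares the function, references no view.
CREATE OR REPLACE PACKAGE emp_pkg AS
  FUNCTION bonus (p_sal NUMBER) RETURN NUMBER;
END emp_pkg;
/

-- 2. View: may call the function declared in the spec.
CREATE OR REPLACE VIEW emp_v AS
  SELECT empno, emp_pkg.bonus(sal) AS bonus FROM emp;

-- 3. Package body: may now reference the view.
CREATE OR REPLACE PACKAGE BODY emp_pkg AS
  FUNCTION bonus (p_sal NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN p_sal * 0.1;
  END;
END emp_pkg;
/
```

This works because Oracle tracks dependencies against package specifications, not bodies, so the view and the body never depend on each other directly.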
You could just create all your views first using the syntax
CREATE OR REPLACE FORCE VIEW
which creates a view even if the referenced objects don't exist yet, then create all your package specs, then the bodies.
Now you could compile all invalid objects, or just let Oracle take care of it (see this link):
Ask Tom - "invalid objects"
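A sketch of the FORCE approach, again with hypothetical names:

```sql
-- FORCE creates the view even though emp_pkg does not exist yet;
-- the view is simply left INVALID until its dependencies appear.
CREATE OR REPLACE FORCE VIEW emp_v AS
  SELECT empno, emp_pkg.bonus(sal) AS bonus FROM emp;

-- After creating the specs and bodies, see what still needs attention:
SELECT object_name, object_type
  FROM user_objects
 WHERE status = 'INVALID';
```

Anything still listed will recompile automatically on first use, or you can recompile it explicitly with ALTER ... COMPILE.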
I think you have to calculate the reference graph manually and then order the execution of the scripts accordingly.
So you need to create a set of scripts views1.sql, views2.sql, ... and packages1.sql, packages2.sql, ...
Views1.sql contains only views that are not referencing any packages.
Packages1.sql contains only packages that are not referencing any views.
Views2.sql contains only views that are referencing packages from packages1.sql.
Packages2.sql contains only packages that are referencing views from views1.sql.
And so on until you are finished.