AS400 SQL query similar to CLRLIB (clear library) in native AS400

I'm working on an AS400 database and I need to manipulate libraries/collections with SQL.
I need to recreate something similar to the CLRLIB command, but I can't find a good way to do this.
Is there a way to delete all the tables from a library with an SQL query?
Maybe I can drop the collection and create a new one with the same name. But I don't know if this is a good way to clear the library.
RESOLVED:
Thanks to Buck Calabro for his solution.
I use the following statement to call CLRLIB from SQL:
CALL QSYS.QCMDEXC('CLRLIB LIB_NAME ASPDEV(ASP_NAME)', 0000000032.00000)
Where LIB_NAME is the name of the library I want to clear, ASP_NAME is the name of the ASP where the library resides, and 0000000032.00000 is the command length.
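For illustration, here is the same call with hypothetical names (a library MYLIB in an ASP MYASP); the second argument is the length of the command string as a DECIMAL(15,5), which for this 26-character string is 26:

CALL QSYS.QCMDEXC('CLRLIB MYLIB ASPDEV(MYASP)', 0000000026.00000)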

(note that the term COLLECTION has been deprecated; SCHEMA is the current term)
Since a library can contain both SQL and non-SQL objects, there's no SQL way to delete every possible object type.
Dropping the schema and recreating it might work. But note that if the library is in a job's library list, it will have a lock on it and you will not be able to drop it. Also, unless the library was originally created via CREATE SCHEMA (or CREATE COLLECTION) you're going to end up with differences.
CRTLIB creates an empty library, CREATE SCHEMA creates a library plus objects needed for automatic journaling and a dozen or so SQL system views.

Read Charles' answer - there may be objects in your schema that you want to keep (data areas, programs, display and printer files, etc.). If the problem is to delete all of the tables so you can rebuild them, then look at the various system catalog tables: SYSTABLES, SYSVIEWS, SYSINDEXES, etc. The system catalog 'knows' about all of the SQL tables, indexes, views, stored procedures, triggers and so on. You could read the catalog and issue the appropriate SQL DROP statements, as in the sketch below.
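As a sketch of that catalog-driven approach (MYLIB is a placeholder schema name; QSYS2.SYSTABLES is the DB2 for i catalog view):

-- Generate one DROP statement per base table in MYLIB
SELECT 'DROP TABLE ' CONCAT TRIM(TABLE_SCHEMA) CONCAT '.' CONCAT TRIM(TABLE_NAME)
FROM QSYS2.SYSTABLES
WHERE TABLE_SCHEMA = 'MYLIB'
AND TABLE_TYPE = 'T' -- 'T' = base table; views, aliases, etc. have other type codes

You can then run the generated statements, or wrap the loop in a small SQL procedure that issues them via EXECUTE IMMEDIATE.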

Related

Flyway repeatable migration for a view can't be dropped

We have a bunch of views in postgres that are created as repeatable migrations by Flyway.
The error we have encountered is that if we want to rename a column by using CREATE OR REPLACE VIEW, Postgres throws an error and cannot do so.
One option is to drop the view first. But this causes a problem if something else depends on the view, which will also throw an error.
Is there any way of dealing with this without having to write complicated scripts that drop any tables/views depending on this view, since that would also require recreating those other views? This process can get very messy, and I'm wondering if there is a more elegant solution.
You cannot use CREATE OR REPLACE VIEW for this because it was designed only to extend the column list:
CREATE VIEW
CREATE OR REPLACE VIEW is similar, but if a view of the same name already exists, it is replaced. The new query must generate the same columns that were generated by the existing view query (that is, the same column names in the same order and with the same data types), but it may add additional columns to the end of the list. The calculations giving rise to the output columns may be completely different.
Options:
make changes that are backward compatible, i.e. only add new columns
drop and recreate the view (you need to handle the object dependencies yourself); see the sketch below
Flyway is a migration-based tool; you could search for a "state-based" migration tool for PostgreSQL (SQL Server has SSDT). Related: State- or migrations-based database development
6 Version Control tools for PostgreSQL
State-based tools - generate the scripts for a database upgrade by comparing the database structure to the model (the etalon).
Migration-based tools - help/assist the creation of migration scripts for moving a database from one version to the next.
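A minimal sketch of the drop-and-recreate option inside a repeatable migration (all names here are placeholders):

-- Drop first so the column list is allowed to change;
-- add CASCADE if dependent views must go too (and recreate them afterwards).
DROP VIEW IF EXISTS my_view;

CREATE VIEW my_view AS
SELECT id,
       new_name AS renamed_column -- the rename that CREATE OR REPLACE VIEW rejects
FROM my_table;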

Oracle - copying stored procedures to remote database

An assignment I have as part of my PL/SQL studies requires me to create a remote database connection and copy all my tables to it from local, and then also copy my other objects that reference data, so my views, triggers, etc.
The idea is that at the remote end, the views etc should reference the local tables provided the local database is online, and if it is not, then they should reference the tables stored on the remote database.
So I've created a connection, and a script that creates the tables at the remote end.
I've also made a PL/SQL block to create all the views and triggers at the remote end. A simple select query is run against the local database to check whether it is online; if it is, a series of EXECUTE IMMEDIATE statements creates the views etc. with references to table_name@local. If it isn't online, the block skips to the exception section, where a similar series of EXECUTE IMMEDIATE statements creates the same views but referencing the remote tables.
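For reference, a minimal sketch of that pattern, with hypothetical names (a link called local_link and a view over an employees table):

DECLARE
  v_dummy NUMBER;
BEGIN
  SELECT 1 INTO v_dummy FROM dual@local_link; -- probe: is the local DB reachable?
  EXECUTE IMMEDIATE
    'CREATE OR REPLACE VIEW emp_v AS SELECT * FROM employees@local_link';
EXCEPTION
  WHEN OTHERS THEN -- probe failed: fall back to the tables copied to the remote DB
    EXECUTE IMMEDIATE
      'CREATE OR REPLACE VIEW emp_v AS SELECT * FROM employees';
END;
/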
OK so this is where I become unsure.
I have a package that contains a few procedures and a function, and I'm not sure what's the best way to create that at the remote end so that it behaves in a similar way in terms of where it picks up its reference tables from.
Is it simply a case of enclosing the whole package-creating block within an 'execute immediate', in the same way as I did for the views, or should I create two different packages and call them something like pack1 and pack1_remote?
Or is there as I suspect a more efficient method of achieving the goal?
cheers!
This is absolutely not how any reasonable person in the real world would design a system. Suggesting something like what I suggest here in the real world will, in the best case, get you laughed out of the room.
The least insane approach I could envision would be to have two different schemas. Schema 1 would own the tables. Schema 2 would own the code. At install time, create synonyms for every object that schema 2 needs to reference. If the remote database is available when the code is installed, create synonyms that refer to objects in the remote database. Otherwise, create synonyms that refer to objects in the local database. That lets you create a single set of objects without using dynamic SQL by creating an extra layer of indirection between your code and your tables.
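A sketch of that indirection, with hypothetical names (schema1 owns the tables, schema2 owns the code, remote_link is the database link):

-- At install time, if the remote database answers, point the synonyms there:
CREATE SYNONYM schema2.employees FOR schema1.employees@remote_link;
-- Otherwise, point them at the local tables instead:
-- CREATE SYNONYM schema2.employees FOR schema1.employees;

Schema 2's code then references plain employees, and the synonym decides where that name resolves, so none of the packages need dynamic SQL.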

sql ignore Dependencies while creating database objects

Is there any way to ignore the dependencies while creating database objects?
For example, I want to create a function on the database that uses the table(s), but I want this function to be created before creating the tables.
I do not see any need for this. Do you see an advantage in doing this?
In Oracle, you can create a function, package, or procedure without the dependent objects present in the database, but the compile will report "compiled with errors"; before you can use the object, you have to recompile it once the dependencies exist.
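A minimal sketch of this (hypothetical names; my_table does not exist yet when the function is created):

CREATE OR REPLACE FUNCTION row_count RETURN NUMBER IS
  n NUMBER;
BEGIN
  SELECT COUNT(*) INTO n FROM my_table; -- my_table is created later
  RETURN n;
END;
/
-- The function is created but marked INVALID. Once my_table exists:
ALTER FUNCTION row_count COMPILE;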

generate schema from stored procedures

Is there a way to make a schema diagram from an SQL Server database using the stored procedures of this database?
I don't mind if I must use an external software.
You could try playing around with CodeSmith Generator. Its SchemaExplorer Schema Discovery API allows you to programmatically access database elements for a given database and do something creative with them. However, it will still be logically hard to reverse-engineer a schema/diagram this way.
You can build a SQLCLR procedure which uses the Scripter Class from the SMO library.
Update: more info on the question reveals that the idea is to generate a table schema with dependencies based on the content of the stored procedures.
The approach would be to generate the table structure from the INFORMATION_SCHEMA views and then parse the contents of the syscomments table to figure out the relations. This will always be approximate, as it is very hard to establish the one-to-many relationships purely from the SQL statements. I think you can make a guess based on which field is referenced more.
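As a rough sketch of the parsing step (Orders is a placeholder table name; on modern SQL Server, sys.sql_modules supersedes syscomments):

SELECT OBJECT_NAME(c.id) AS object_name, c.text
FROM syscomments c
WHERE c.text LIKE '%Orders%' -- crude text match; expect false positives

Anything beyond this simple match (join direction, cardinality) needs real parsing of the statement text.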
If you can't see the tables, then you cannot generate the schema.
That is, you can't if you have permissions on stored procedures only.
At least two reasons:
the stored proc may JOIN and use several tables
you can't see constraints, indexes, keys etc even if you had table names
Basically, you can only:
see what you have permissions on in SSMS etc
see the internals if you have VIEW DEFINITION rights
Edit, after clarification
There is no way to script implied aspects (such as missing foreign keys) of the schema from code

Purging SQL Tables from large DB?

The site I am working on as a student will be redesigned and released in the near future, and I have been assigned the task of manually searching through every table in the DB the site uses to find tables we can consider for deletion. I'm doing the search through every HTML file's source code in Dreamweaver, but I was hoping there is an automated way to check my work. Does anyone have any suggestions as to how this is done in the business world?
If you search through the code, you may find SQL that is never used, because the users never choose those options in the application.
Instead, I would suggest that you turn on auditing on the database and log what SQL is actually used (in Oracle, for example, via the AUDIT statement). Other major database servers have similar capabilities.
From the log data you can identify not only what tables are being used, but their frequency of use. If there are any tables in the schema that do not show up during a week of auditing, or show up only rarely, then you could investigate this in the code using text search tools.
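For example, a sketch using Oracle's traditional auditing, with a hypothetical candidate table legacy_orders:

AUDIT SELECT ON app_schema.legacy_orders BY ACCESS;

-- After a week, see which audited objects were actually touched:
SELECT obj_name, COUNT(*) AS hits
FROM dba_audit_object
GROUP BY obj_name
ORDER BY hits DESC;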
Once you have candidate tables to remove from the database, and approval from your manager, don't just drop the tables: recreate them as empty tables, or put one dummy record in each table with mostly null values (or zero or blank) in the fields, except for name and descriptive fields where you can put something like "DELETED" or "Report error DELE to support center". That way, the application won't fail with a hard error, and you have a chance of finding out what users are doing when they end up hitting these unused tables.
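A sketch of that tombstone row (table and column names are hypothetical):

-- The table has been recreated empty; leave one marker row so the app fails softly.
INSERT INTO legacy_orders (id, customer_name, notes)
VALUES (0, 'DELETED', 'Report error DELE to support center');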
Reverse engineer the DB (Visio, Toad, etc...), document the structure and ask designers of the new site what they need -- then refactor.
I would start by combing through the HTML source for keywords:
SELECT
INSERT
UPDATE
DELETE
...using grep etc. None of these are HTML entities, and you can't reliably use table names because you could be dealing with views (assuming any exist in the system). Then you have to pore over the statements themselves to determine what is being used.
If [hopefully] functions and/or stored procedures were used in the system, most DBs have a reference feature to check for dependencies.
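For instance, on SQL Server this sketch lists the objects that reference a given table (dbo.Orders is a placeholder name):

SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities('dbo.Orders', 'OBJECT');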
This would be a good time to create a Design Document on a screen by screen basis, listing the attributes on screen & where the value(s) come from in the database at the table.column level.
Compile your list of tables used, and compare to what's actually in the database.
If the table names are specified in the HTML source (and if that's the only place they are ever specified!), you can do a Search in Files for the name of each table in the DB. If there are a lot of tables, consider using a tool like grep and creating a script that runs grep against the source code base (HTML files plus any others that can reference the table by name) for each table name.
Having said that, I would still follow Damir's advice and take a list of deletion candidates to the data designers for validation.
I'm guessing you don't have any tests in place around the data access or the UI, so there's no way to verify what is and isn't used. Provided that the data access is consistent, scripting will be your best bet. Have it search out the tables/views/stored procedures that are being called and dump those to a file to analyze further. That will at least give you a list of everything that is actually called from some place. As for if those pages are actually used anywhere, that's another story.
Once you have the list of the database elements that are being called, compare that with a list of the user-defined elements that are in the database. That will give you the ones that could potentially be deleted.
All that being said, if the site is being redesigned then a fresh database schema may actually be a better approach. It's usually less intensive to start fresh and import the old data than it is to find dead tables and fields.