When OpenERP modules are uninstalled, the associated tables still exist in the PostgreSQL database. Is there any way to synchronize the OpenERP modules with the ORM model so that those tables (and the users assigned to them) are removed? It makes no sense to keep the tables without the application that accesses them.
You could write a routine that starts with the ir_model table and looks for tables in the schema that don't have a matching entry in ir_model. You will have to be careful of transient models (osv_memory in version 6 parlance) and also of models with _auto = False, which is usually used to create database views. A sketch of such a query is below.
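A minimal sketch of such a query in PostgreSQL (treat the output as a starting point for manual review, not a safe drop list):

-- Sketch: list tables in the public schema with no matching ir_model entry.
-- A model name like 'res.partner' maps to the table 'res_partner'.
-- Caveats: many-to-many relation tables (usually named *_rel) have no
-- ir_model row either, and models with _auto = False are often views.
SELECT t.tablename
FROM pg_tables t
WHERE t.schemaname = 'public'
  AND t.tablename NOT IN (
      SELECT replace(m.model, '.', '_') FROM ir_model m
  );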
We have a SQL Server database that is very dynamic, always creating and dropping tables in a custom schema called 'temp' (we have a dbo schema and a temp schema). We also use SSDT to maintain and monitor changes in our schema, but we are unable to use the update feature on a schema comparison: if a new table (say temp.MyTable) is created after the schema comparison is made and before the update is attempted, SSDT invalidates the schema comparison because something has changed. At the moment, our only solution is to run the schema comparisons around midnight, when system activity is practically non-existent, but that is not ideal for the person who has to do the schema comparison.
My question is: is there a way we can exclude tables that are part of the 'temp' schema from the schema comparison?
How are you doing the deployment? As a test, I used sqlpackage.exe to publish a dacpac while constantly creating new tables, and it deployed without complaining.
However, there are a couple of things you can do. The first is to stop the deployment from failing when drift is detected:
/p:BlockWhenDriftDetected=False
This is set to true by default.
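For illustration, a publish command with that property might look like this (file and server names are placeholders):

sqlpackage.exe /Action:Publish /SourceFile:MyDatabase.dacpac /TargetServerName:MyServer /TargetDatabaseName:MyDatabase /p:BlockWhenDriftDetected=False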
The second thing is to ignore the temp schema. I don't think this will help unless you also disable the drift check, but you might want to use this filter to stop all changes to the temp schema:
http://agilesqlclub.codeplex.com/
Ed
In a SQL database, I can run a query to present information as it exists, and I can create new compilations of data that did not previously exist.
For instance, SELECT * FROM Table1 would return information that already existed, while a series of nested joins and WHERE clauses could present data in ways that didn't exist before the query was run.
My question is whether the database's information schema -- assuming it's never been pulled up before -- falls into the first category or the second.
The information schema views query system tables that already exist in the database. You can inspect these yourself, e.g. sys.tables and the other objects that are called catalog views in SQL Server.
Therefore, using these views falls into the second category in your question: using existing data in a different way.
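For example, in SQL Server these two queries return essentially the same list of user tables, one through the information schema view and one through the catalog views it is built on:

SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE';

SELECT s.name AS table_schema, t.name AS table_name
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id;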
Everything in INFORMATION_SCHEMA is just a view on the system tables. So the answer to your question is both that the data has always been there (because every object in the database has one or more rows in system tables somewhere representing it) and also that it's generated for your viewing pleasure upon querying (to present it in the form that INFORMATION_SCHEMA requires).
Note that even what we normally call "the system tables" (sys.tables and related) are also just views on the real, actual, physical system tables, which are not accessible to any user but only to the database engine itself -- viewing those directly requires a direct administrator connection and tweaking some flags, and is typically not something done by anyone other than SQL Server developers.
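One quick way to see this for yourself in SQL Server is to print the definition of an information schema view; it reads from the catalog views rather than storing any data of its own:

SELECT OBJECT_DEFINITION(OBJECT_ID('INFORMATION_SCHEMA.TABLES'));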
What this implies in a FOIA context is probably best answered in a legal setting, not an information-theoretical one.
I have a Postgres database with several schemas (all with the same structure). I want to know if it is possible to change the structure (table names, new columns, etc.) for all the schemas in the same database at once. Is that possible, and what is the purpose of schemas in a database?
Thanks.
I'm going to focus on the second half of your question, because I think it'll answer the first half (and I'm not sure I understand the first half).
what's the purpose of the schemas in a database?
This confused me when I first switched from MySQL to PostgreSQL. A Postgres schema is essentially the same as a MySQL database. In fact, according to the MySQL Reference Manual:
In MySQL, physically, a schema is synonymous with a database.
That raises the question: what is a PostgreSQL database, then? From the PostgreSQL documentation:
More accurately, a database is a collection of schemas and the schemas contain the tables, functions, etc. So the full hierarchy is: server, database, schema, table (or some other kind of object, such as a function).
So a PostgreSQL database is essentially a collection of schemas? That seems kind of pointless; why do we need that step in the hierarchy? Let's take a look at the docs for a PostgreSQL schema:
A PostgreSQL database cluster contains one or more named databases. Users and groups of users are shared across the entire cluster, but no other data is shared across databases. Any given client connection to the server can access only the data in a single database, the one specified in the connection request.
A database contains one or more named schemas, which in turn contain tables. Schemas also contain other kinds of named objects, including data types, functions, and operators. The same object name can be used in different schemas without conflict; for example, both schema1 and myschema can contain tables named mytable. Unlike databases, schemas are not rigidly separated: a user can access objects in any of the schemas in the database he is connected to, if he has privileges to do so.
So, in PostgreSQL, a schema contains tables, functions, etc., and a database manages user/group connectivity and access/roles for a specific group of schemas. Typically, I work under one database and have information broken into schemas to segment it.
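A small illustration of the name-separation point, using the schema and table names from the quoted documentation:

CREATE SCHEMA schema1;
CREATE SCHEMA myschema;
CREATE TABLE schema1.mytable (id integer);
CREATE TABLE myschema.mytable (id integer);  -- same table name, no conflict
SELECT * FROM schema1.mytable;               -- qualify the name to pick one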
How can I sync two databases and do a manual refresh of the entities in either database whenever I want?
Let's say I have two databases, DB1 (prod) and DB2 (dev). I want to update/insert only a few tables from the prod DB to the dev DB. How could I achieve this? Is this possible without a DB link, since I do not have privileges to create one?
If you only want to do a manual refresh, set up an export/import/Data Pump script to copy the data across, provided there is not too much data involved. If there is a large amount of data, you could write some PL/SQL to move only the new/changed rows. This will be easier if your data has fields such as created_on/updated_on.
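As a minimal sketch of that approach (all table and column names here are invented for illustration), assuming the prod rows have been staged somewhere the dev database can read, e.g. loaded via Data Pump, and that each table has a key id and an updated_on timestamp:

MERGE INTO dev_schema.my_table d
USING staging.my_table s
ON (d.id = s.id)
WHEN MATCHED THEN
  UPDATE SET d.col1 = s.col1, d.updated_on = s.updated_on
WHEN NOT MATCHED THEN
  INSERT (id, col1, updated_on) VALUES (s.id, s.col1, s.updated_on);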
We're developing a Doctrine-backed website using YAML to define our schema. Our schema changes regularly (including foreign-key relations), so we need to do a lot of:
Doctrine::generateModelsFromYaml(APPPATH . 'models/yaml', APPPATH . 'models', array('generateTableClasses' => true));
Doctrine::dropDatabases();
Doctrine::createDatabases();
Doctrine::createTablesFromModels();
We would like to keep existing data and store it back in the re-created database. So I copy the data into a temporary database before the main db is dropped.
How do I get the data from the "old-scheme DB copy" to the "new-scheme DB"? (the new scheme only contains NEW columns, NO COLUMNS ARE REMOVED)
NOTE:
This obviously doesn't work because the column count doesn't match.
INSERT INTO newscheme.Table SELECT * FROM copy.Table;
This obviously does work; however, writing it out for every table takes too much time:
INSERT INTO newscheme.Table SELECT old.col, old.col2, old.col3, 'somenewdefaultvalue' FROM copy.Table AS old;
Have you looked into Migrations? They allow you to alter your database schema in a programmatic way, without losing data (unless you remove columns, of course).
How about writing a script (using the Doctrine classes, for example) which parses the YAML schema files (both the previous version and the "next" version) and generates the SQL scripts to run? It would be a one-time job and not require that much work. The benefit of generating manual migration scripts is that you can easily store them in the version control system and replay the version steps later on. If that's not something you need, you can just gather up the changes in code and apply them directly through the database driver.
Of course, the fancier your schema changes become, the harder the maintenance will get, e.g. column name changes, null to not null, etc.
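As a hypothetical illustration of what such a generated script could emit (column name and type invented, and assuming new columns are appended at the end of the new table too): for a table that gained one column, it is enough to add the new column to the old copy with its default, after which the column counts match and the short form works again:

ALTER TABLE copy.Table ADD COLUMN newcol VARCHAR(255) DEFAULT 'somenewdefaultvalue';
INSERT INTO newscheme.Table SELECT * FROM copy.Table;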