Is there any way to retrieve an older version of a changed view in Oracle SQL Developer (PL/SQL)?
I really don't know where to start at the moment. It's not a materialized view.
A view is just a stored query. If you replaced it with a new version, the old one is lost.
A few options:
check your version control system
restore it from a backup, either from filesystem files or from a database backup, be it RMAN or even a .dmp file (the result of a Data Pump export)
if you dropped it, see whether you can get it back with a flashback query on DBA_VIEWS (see the sketch after this list)
is it in the recycle bin?
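A minimal sketch of that flashback query, assuming the undo data is still available (the owner, view name, and timestamp window are hypothetical; whether this works at all depends on undo retention, privileges, and your Oracle version):

SELECT text
FROM   dba_views AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR)
WHERE  owner = 'SCOTT'
AND    view_name = 'MY_VIEW';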
If none of this helps, huh, you're probably out of luck.
The database doesn't keep a history of object definitions.
However, any good development process will involve a change control and versioning system. Hopefully your database objects are being tracked as files in a Git repo somewhere.
If not, your DBA could probably get the previous version of your VIEW from a backup, or by mining the redo/archive logs.
As I'm about to implement it myself, I'm curious to know how people handle incremental backups for their DBs.
The straightforward way, as I see it, is to shut down CouchDB and use a tool like rsync or duplicity to back up the DB files. It should do the job well, and as an added bonus it could also be used to back up views.
Does anyone know if a similar backup could be done while CouchDB is still running (and the DB is being updated)?
Does anyone do incremental backups in CouchDB 2.0?
For incremental backup, you can query the changes feed of a database using the "since" parameter, passing the last sequence number from your previous backup, and then copy only the changes into a new database on the same or a different server. AFAIK, there is no "since" parameter for replication, so you will need to roll your own framework for this.
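As a rough illustration (the database name and the sequence placeholder are hypothetical), pulling everything since the last backup is a single HTTP call against the changes feed:

curl 'http://localhost:5984/mydb/_changes?since=<last_seq_from_previous_backup>&include_docs=true'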
I'm using SQL Server 2005. I can right-click on a database and create scripts for the database that will recreate the structure (tables, views, stored procedures) elsewhere. Or just serve as a backup, a version snapshot, etc.
But, is there a way I can schedule it to do this? And output to a folder I choose?
I really appreciate the help.
Don
You could probably schedule this using SMO, though it may take some work to get up and running.
However, a more elegant approach might be to schedule a full backup to a new file (with today's timestamp), and archive it. This way retrieving the scripts is as simple as restoring that version of the database somewhere, and extracting manually.
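For instance, a nightly SQL Agent job step could run a sketch like this (the database name and backup path are hypothetical):

-- Back up to a file named with today's date, e.g. MyDb_20240101.bak
DECLARE @file NVARCHAR(260);
SET @file = N'D:\Backups\MyDb_' + CONVERT(NVARCHAR(8), GETDATE(), 112) + N'.bak';
BACKUP DATABASE MyDb TO DISK = @file WITH INIT;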
An even better approach: if you store your change scripts in source control, you should always be able to pull any version of the database.
I've used both SMO's predecessor (SQL-DMO) from VB as well as ApexSQLScript from the command line to do scheduled scripting of objects.
This is fine for very large databases where you do not have the ability to quickly restore a database just to look at schema-versioning information for the small tables/views/procs that happen to live in the same database.
In fact, this is a good argument for separating out small, fast-changing schemas into databases distinct from large, slowly changing ones.
Here's a more general question on how you handle database schema changes in a development team.
We are a team of developers, and the databases used during development run locally on everyone's box, as we want to avoid requiring web access all the time. So running a single central database instance somewhere is not a real option.
Whenever one of us decides that it is time to extend/change the DB schema, we email database files (MYI/MYD) or SQL files around, or give the others instructions over the phone on what they need to do to get the changed code running on their local DBs. That's certainly not the perfect approach. The same problem arises when we need to adjust the DB schema on staging or production once a new release is ready.
I was wondering ... how do you guys handle this kind of stuff? For source code, we use SVN.
Really appreciate your input!
Thanks,
Michael
One approach we've used in the past is to script the entire DDL for the database, along with any test/setup data needed. Store that in SVN, then when there's a change, any developer can pull down the changes, drop the database, and rebuild it from the script files.
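A sketch of what the top-level rebuild script could look like (MySQL client syntax, since the question mentions MYI/MYD files; all names are hypothetical):

DROP DATABASE IF EXISTS appdb;
CREATE DATABASE appdb;
USE appdb;
-- each of these files lives in SVN alongside the code
SOURCE schema/tables.sql;
SOURCE schema/procedures.sql;
SOURCE data/test_data.sql;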
At the very least you should have the scripts of all the objects in the database (tables, stored procedures, etc) under source control.
I don't think mailing schema changes is a real option for a professional development team.
We had a system on one of my previous teams that was the best I've encountered for dealing with this situation.
The nightly build of the application included a build of a database (SQL Server). The database got built to the Test DB server. Each developer then had a DTS package (this was a while ago, and I'm sure they upgraded to SSIS packages) to pull down that nightly DB build to their local DB environment.
This kept the master copy in one location and put the onus on the developers to keep their local dev databases fresh.
At my work, we deal with pretty large databases that are time-consuming to generate, so for us, starting from scratch with a new DB isn't ideal. Like Harper, we have our DDL in SVN. Additionally, we store a version number in a database table. Every check-in that changes the DB must be accompanied by a script that:
Will upgrade the database schema and modify any existing data appropriately, and
Will update the version number in the database.
Further, we number the scripts and database versions such that a script we've written knows how to upgrade further along a branch or from an older branch to a newer one without any input from the developer (apart from the database name and the directory to the upgrade scripts).
Thus, if I've got a copy of a customer's 4 GB DB from a year-old version and I want to test how their data will work with the version we cut yesterday, I can just run our script and let it handle the upgrades, rather than starting from scratch and redoing every INSERT, UPDATE and DELETE performed since the database was created.
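As a sketch of the pattern (the table, column, and version number are made up), an individual upgrade script might contain:

-- upgrade_042.sql: schema change, data fix-up, then bump the version
ALTER TABLE customer ADD email VARCHAR(255);
UPDATE customer SET email = '' WHERE email IS NULL;
UPDATE schema_version SET version = 42;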
We have a non-SQL description of the database schema. When the application starts, it compares the desired database schema with the actual database schema, and performs whatever ADD TABLE, ADD COLUMN, ADD INDEX, etc. statements it needs to do to get the database to look right.
This doesn't handle every case; sometimes you have to delete the database and recreate it if you've changed something that the schema resolver can't handle, but most of the time we don't need to worry about it.
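A sketch of the kind of check such a resolver performs (using the standard information_schema; all names are hypothetical):

-- Does the expected column exist?
SELECT COUNT(*)
FROM   information_schema.columns
WHERE  table_schema = 'appdb'
AND    table_name   = 'customer'
AND    column_name  = 'email';

-- If the count is 0, the resolver issues:
ALTER TABLE customer ADD COLUMN email VARCHAR(255);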
I'd certainly keep the database schema in source code control.
At my present job, every time there's a schema change, we write the SQL for the change (alter table xyz add column ...) and put it in SVN. Then developers can update test databases by running this script. It's pretty clumsy but it works.
At a previous job I wrote some code that at application start-up would automatically compare the actual database schema to what it expected, and if it was not up to date perform the updates. Mostly this was done for deployment reasons: When we shipped new copies of the software, it would then automatically update the user's database. But it was also handy for developers.
I think there should be some generic SQL tool to do this. Maybe there is, but I've never seen one.
What tools do you use to develop Oracle stored procedures in a team:
To automatically "lock" the current procedure you are working with, so nobody else in the team can make changes to it until you are finished.
To automatically send the changes you make in the stored procedure, in an Oracle database, to a Subversion, CVS, ... repository
Thanks!
I'm not sure if the original poster is still monitoring this, but I'll ask the question anyway.
The original post requested to be able to:
To automatically "lock" the current
procedure you are working with, so
nobody else in the team can make
changes to it until you are finished.
Perhaps the problem here is one of development paradigm more than the inability of a product to "lock" the stored proc. Whenever I hear "I want to lock this so no one else changes it," I immediately get the feeling that people are sharing a schema and everyone is developing in the same space.
If this is the case, why not simply let everyone have their own schema with a copy of the data model? I mean seriously folks, it doesn't "cost" anything to create another schema. That way, each developer can make changes until they're blue in the face without affecting anyone else.
Another trick I've used in the past (on small teams), when it wasn't feasible to let every developer have their own copy of the data because of size, was to have a master schema with all the tables and code in it, with public synonyms pointing to it all. Then, if a developer wants to work on a stored proc, he simply creates it in his own schema. That way, Oracle name resolution finds that one first instead of the copy in the master schema, allowing him to test his code without affecting anyone else. This does have its drawbacks, but this was a very specific case where we could live with them. I would NEVER implement something like this in production, obviously.
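A sketch of that synonym trick (schema and procedure names are hypothetical):

-- Once, against the master schema:
CREATE PUBLIC SYNONYM calc_pay FOR master.calc_pay;

-- A developer experiments by compiling a private copy; Oracle resolves
-- names in the current schema before falling back to the public synonym:
CREATE OR REPLACE PROCEDURE calc_pay IS
BEGIN
  NULL;  -- experimental version goes here
END;
/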
As for the second requirement:
To automatically send the changes you make in the stored procedure, in an Oracle database, to a Subversion, CVS, ... repository
I'd be surprised to find tools out there smart enough to do this (perhaps an opportunity :). It would have to connect to your DB, query the data dictionary (USER_SOURCE) and pull out the associated text. A tall order for source control systems, which are almost universally file-based.
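Pulling the text itself is the easy part; it's watching for changes and feeding them into a file-based source control system that's hard. Something like (the procedure name is hypothetical):

SELECT text
FROM   user_source
WHERE  name = 'MY_PROC'
AND    type = 'PROCEDURE'
ORDER  BY line;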
Oracle's new SQL Developer has version control built-in.
Here is a link to the product.
http://www.oracle.com/technology/products/database/sql_developer/files/what_is_sqldev.html
http://www.oracle.com/technology/products/database/sql_developer/images/what_version.png
Treat PL/SQL as you would any other code: store it in files, and manage those files with your revision control tool and your internal procedures.
If you do not already have a revision control tool, then write your requirements down and pick one. A lot of people, it seems, use Subversion with TortoiseSVN as a client on Windows (I do).
The thing is: use your tool as recommended, and adapt your procedures accordingly. For instance, Subversion uses a copy-modify-merge model by default, as opposed to the lock-modify-unlock model you seem to favor.
In my case, I like to use TortoiseSVN, as stated above. And as is usual with this tool:
I never lock any files. This is very manageable with small teams, and it requires planning ahead on larger ones, which is always a good thing IMHO.
I send my changes manually back to the server, because ... I don't think there's another way with Subversion (plus, internal procedures forbid a commit without a message, which is also a good thing IMHO).
And whatever your choice, I recommend reading this post (and related ones) about database versioning.
A relatively simple (if slightly old-fashioned) solution might be to use a version control system in "locking" rather than "merge" mode. Subversion and CVS generally use a "merge" mode (although I believe Subversion can be made to "lock" files).
"Locking"-mode version control systems do have their own drawbacks, of course.
The only way I can think of doing this in Oracle might be some sort of BEFORE CREATE trigger, maybe referencing a table to look up who is allowed to change a given object. Sounds a bit nasty, though.
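For what it's worth, a rough, untested sketch of that trigger idea (the dev_locks table and its columns are hypothetical):

CREATE OR REPLACE TRIGGER lock_plsql_objects
BEFORE CREATE ON DATABASE
DECLARE
  n PLS_INTEGER;
BEGIN
  -- dev_locks is a hypothetical table: (object_name, locked_by)
  SELECT COUNT(*) INTO n
  FROM   dev_locks
  WHERE  object_name = ora_dict_obj_name
  AND    locked_by  <> USER;
  IF n > 0 THEN
    RAISE_APPLICATION_ERROR(-20001,
      ora_dict_obj_name || ' is locked by another developer');
  END IF;
END;
/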
Using Source Control for Oracle you get a lot of what you're looking for.
Stored procedures (as well as packages, functions, tables, etc.) can be locked manually using the interface, not automatically, but this does prevent others from making changes.
The new SQL to create the object can then be checked into SVN or TFS (no CVS support unfortunately).
The tool is not free but has a free 28-day trial.
Using Oracle SQL Developer 1.5, you can easily create and manage connections to CVS or Subversion. To create a CVS connection (for example), click Versioning -> CVS -> Check out Module. You will run through a wizard to create the connection (host, username, etc), then you can check your procedures/functions out and in as normal.
Integration with CVS is also provided in Toad.
You may also want to look at Aqua Data Studio. It has SVN support built in as well and is a great stored procedure editor.
After searching for a tool to handle version control for Oracle objects with no luck we created the following (not perfect but suitable) solution:
Using the dbms_metadata package we create a metadata dump of our Oracle server. We create one file per object, so the result is not one huge file but a bunch of files. To recognize deleted objects, we delete all the files before creating the dump again.
We copy all the files from the server to the client computer.
Using NetBeans we recognize the changes and commit them to the CVS server (or check the diffs...). Any CVS-handling software would work here, but we were already using NetBeans for other purposes. NetBeans also lets us create an Ant task for calling the Oracle process mentioned in step 1, copying the files mentioned in step 2, and so on.
Here is the most important query for step 1:
SELECT object_type,
       object_name,
       dbms_metadata.get_ddl(object_type, object_name) AS object_ddl
FROM   user_objects
WHERE  object_type IN ('INDEX', 'TRIGGER', 'TABLE', 'VIEW', 'PACKAGE',
                       'FUNCTION', 'PROCEDURE', 'SYNONYM', 'TYPE')
ORDER  BY object_type, object_name;
The one-file-per-object approach helps to identify the changes. If I add a field to table TTTT (not a real table name, of course), then only the TABLE_TTTT.SQL file will be modified.
Both step 1 and step 3 are slow processes (several minutes for a few thousand files).
Toad also does this without requiring CVS / SVN.
Using Oracle 10g on our testing server, what is the most efficient/easy way to back up and restore a database to a static point, assuming that you always want to go back to the given point once a backup has been created?
A sample use case would be the following:
Install and configure all software.
Modify the data to the base testing point.
Take a backup somehow (how to do this is part of the question).
Do the testing.
Return to the state of step 3 (restore back to the backup point; this is the other half of the question).
Ideally this would be done through SQL*Plus or RMAN or some other scriptable method.
You do not need to take a backup at your base time. Just enable Flashback Database, create a guaranteed restore point, run your tests, and flash back to the previously created restore point.
The steps for this would be:
Startup the instance in mount mode.
startup force mount;
Create the restore point.
create restore point before_test guarantee flashback database;
Open the database.
alter database open;
Run your tests.
Shut down and mount the instance.
shutdown immediate;
startup mount;
Flashback to the restore point.
flashback database to restore point before_test;
Open the database.
alter database open;
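To confirm the restore point exists, and later to reclaim the space it pins once you are done with it for good, something along these lines should work:

select name, guarantee_flashback_database from v$restore_point;
drop restore point before_test;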
You could use a feature in Oracle called Flashback, which allows you to create a restore point that you can easily jump back to after you've done your testing.
Quoted from the site,
Flashback Database is like a 'rewind button' for your database. It provides database point-in-time recovery without requiring a backup of the database to first be restored. When you eliminate the time it takes to restore a database backup from tape, database point-in-time recovery is fast.
From my experience, import/export is probably the way to go. Export creates a logical snapshot of your DB, so you won't find it useful for big DBs or exacting performance requirements. However, it works great for making snapshots and whatnot to use on a number of machines.
I used it on a Rails project to get a prod snapshot that we could swap between developers for integration testing, and we did the job within rake scripts. We wrote a small sqlplus script that destroyed the DB and then imported the dump file over the top.
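For illustration, the classic pair of commands looks roughly like this (the credentials, file name, and schema are hypothetical):

exp system/secret FILE=snapshot.dmp OWNER=appuser
imp system/secret FILE=snapshot.dmp FROMUSER=appuser TOUSER=appuser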
Some articles you may want to check:
OraFAQ Cheatsheet
Oracle Wiki
Oracle apparently doesn't like imp/exp any more, in favour of Data Pump; when we used Data Pump we needed things we couldn't have (i.e. SYSDBA privileges we couldn't get in a shared environment). So take a look, but don't be disheartened if Data Pump is not your bag; the old imp/exp tools are still there :)
I can't recommend RMAN for this kind of thing because RMAN takes a lot of setup and will need configuration in the DB (it also has its own catalog DB for backups, which is a pain in the proverbial for a bare-metal restore).
If you are using a filesystem that supports copy-on-write snapshots, you could set up the database to the state that you want, then shut down everything and take a filesystem snapshot. Then go about your testing, and when you're ready to start over you can roll back the snapshot. This might be simpler than the other options, assuming you have a filesystem that supports snapshots.
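With ZFS, for example (the dataset name is hypothetical), the whole cycle is two commands wrapped around the test run:

zfs snapshot tank/oradata@pre_test
# ... run the tests ...
zfs rollback tank/oradata@pre_test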
@Michael Ridley's solution is perfectly scriptable, and will work with any version of Oracle.
This is exactly what I do; I have a script which runs weekly to:
Rollback the file system
Apply production archive logs
Take new "Pre-Data-Masking" FS snapshot
Reset logs
Apply "preproduction" data masking.
Take new "Post-Data-Masking" snapshot (allows rollback to post masked data)
Open database
This allows us to keep our development databases close to our production database.
To do this I use ZFS.
This method can also be used for your applications, or even your entire "environment" (e.g., you could "roll back" your entire environment with a single scripted command).
If you are running 10g, though, the first thing you'd probably want to look into is Flashback, as it's built into the database.