Oracle repository - sql

I have a repository connected with the integrated source control in Oracle. I did a full export of my database, without any table data, into separate directories, and it looks fine. The problem is that every time I want to change something I have to export the stored procedure or table SQL code and then upload it to the repository, which is hard because at the end of the day I'm not sure how many changes I made and I can forget some of them. A full export without data could have been the solution, but I don't have time to wait 20-25 minutes for the export at the end of every day. Is there any way to export only the changes made on the current day, or made after the last export? Or maybe to export the SQL code directly on each compilation inside Oracle management studio? The database is not on my computer; it's located on a server which I'm connected to.
Here is how my Git folder looks, in separate folders:

You need to work the other way around. To change a package for example, open the corresponding source code file from your Git repository in your development tool (SQL Developer, PL/SQL Developer etc), make your changes, test, save the file, check in with ticket number and comment. As a rule you should not edit stored code directly in the database. (PL/SQL Developer has a checkbox "Allow editing of database source", which I generally leave unchecked. Probably other tools have something similar.)
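As a rough illustration of that flow (the file name, package name and ticket number below are made up, not from the question): the package source lives in the repository and is always compiled from the file, never edited in the database.

    -- packages/billing_pkg.pks : hypothetical package spec kept in the Git repository
    CREATE OR REPLACE PACKAGE billing_pkg AS
      -- Edit this file, not the object in the database.
      FUNCTION invoice_total(p_invoice_id IN NUMBER) RETURN NUMBER;
    END billing_pkg;
    /
    -- Apply the file to the development database, e.g. from SQL*Plus or SQL Developer:
    --   @packages/billing_pkg.pks
    -- test, then commit the very same file:
    --   git add packages/billing_pkg.pks
    --   git commit -m "TICKET-123: describe the change"

Because the file you compiled is already the file in Git, there is nothing to export back out of the database at the end of the day.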

Related

Some useful functions of MySQL Workbench for SQL Server Management Studio

Our project is moving from MySQL to MS SQL and after a long time working with MySQL Workbench I really miss some features in SQL Server Management Studio (2014).
Do you know whether they exist in SSMS, or whether there is an alternative/replacement application for SSMS to work with the database?
Functions are listed below:
1. Generate an update data script that I can review and copy-paste. Do not update data when I move to another row while the table is open for editing.
Some changes are still made in the database in our project, and sometimes it's easier to add some rows manually in 5 tables, get the script, test it and run the script on the production environment. I don't want to write a script for each update and I don't want to make a mistake when copying data to the production server using the edit-table option.
2. Review the update table script BEFORE the changes are made, not after (I am talking about Tools - Options - Designer - Auto generate change scripts).
3. Upload a file using a select-file dialog into a binary field.
Again, I know about using the OPENROWSET function (a T-SQL sketch of that approach follows this list), just interested in how to do it the way I used to.
4. Ability to view large text fields in a convenient way in SSMS. Now I have to copy data from a field and paste it into Notepad (for example, an error message with a long trace log).
5. Save a few tabs with some useful scripts and open all of them when I open SSMS.
6. Is there any way to organize tabs to be able to work with 10+ tabs more effectively? Now only 6 of them can be shown on the screen (compare that to 15 tabs in MySQL WB).
7. A simple 'search field' (like Ctrl+F in Excel) to be able to search data in all fields displayed on the screen.
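Regarding the OPENROWSET point above, there is no built-in file-picker for this in SSMS as far as I know; for reference, a minimal T-SQL sketch of the OPENROWSET approach (the table name and file path are invented):

    -- Load a file into a varbinary(max) column; dbo.Attachments and the path are hypothetical.
    INSERT INTO dbo.Attachments (FileName, FileData)
    SELECT 'error_trace.log', src.BulkColumn
    FROM OPENROWSET(BULK N'C:\temp\error_trace.log', SINGLE_BLOB) AS src;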
I would appreciate any ideas.
Thank you.

How to display the result of a procedure run from the server side in Oracle?

I am trying to automate a daily monitoring activity where there is a set of scripts to be executed (all are SELECT statements). I am in the process of creating a procedure which runs these scripts and, by means of the scheduler, it will run once daily. My problem is that, since these activities take place on the server side (server backbone), how do I save the results? Earlier we would run all the scripts manually and save the output in a notepad file. Is there any option to do the same with the automation, like saving on our PC or in SQL Developer, instead of logging in to the server and searching for the path where the file is saved? I thought of saving the results in a table but I am looking for a better option. Please suggest...
Generally it is a good idea to save the results in a table as this gives you flexibility when querying the results or exporting them in multiple formats.
There are multiple options to get the data to the client:
Query the table with the results from the client
Generate an HTML report from the results table and make it accessible from an HTTP server.
You can also create a web PL/SQL package and generate the HTML within (http://docs.oracle.com/cd/B28359_01/appdev.111/b28424/adfns_web.htm#i1006207)
Export the data from the results table to a file and put it in a shared directory that is accessible by the client.
Email the results from the PL/SQL package.
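A minimal sketch of the first option, capturing the output of one monitoring query into a results table and scheduling it daily; every object name here is invented for illustration:

    -- Hypothetical results table: one row per captured metric per run.
    CREATE TABLE monitoring_results (
      run_time    TIMESTAMP DEFAULT SYSTIMESTAMP,
      check_name  VARCHAR2(100),
      check_value VARCHAR2(4000)
    );

    CREATE OR REPLACE PROCEDURE run_daily_checks IS
    BEGIN
      -- Each monitoring SELECT becomes an INSERT ... SELECT into the results table.
      INSERT INTO monitoring_results (check_name, check_value)
      SELECT 'invalid_objects', TO_CHAR(COUNT(*)) FROM user_objects WHERE status = 'INVALID';
      COMMIT;
    END;
    /

    -- Run it once a day with DBMS_SCHEDULER.
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'DAILY_MONITORING',
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'RUN_DAILY_CHECKS',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY;BYHOUR=6',
        enabled         => TRUE);
    END;
    /

The client side then only has to query monitoring_results from SQL Developer instead of collecting files from the server.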
I thought of saving the results in a table but i am looking for a better option.
What exactly is the issue with the "table" option?
Regarding "saving in our PC or SQL Developer", there are a couple of problems with that:
a PC is usually less resilient to reboots, crashes, etc.;
it's intended for private use - unless you're working alone, these logs may be of interest to other people.
Other options: the job can be made to send e-mail; copy the file to a well-known place (including one which is directly mounted on your PC); write to a database table (as already suggested); and more.

How to deploy SQL script to clients

Our company is in the process of adopting TFS for source repository and project management. I am in charge of the database part of the project. We are using SQL Server 2008 R2, Visual Studio 2012 and TFS Online. We have a database that is used by several of our applications. So far I have been the only one handling any changes to this database. As the company is expanding we are going to have multiple dev teams, so I am planning to save the database as an SSDT project in TFS.
At the moment I am maintaining my database as follows:
I have separate folders for UDFs, Stored Procedures, and Config.
Under these folders I have subfolders for each object. For example, for stored procedures I have a subfolder for each stored procedure which contains the SQL script to create the SP. The Config folder contains any script similar to SSDT's post-deployment script (for example, populating static data).
The SQL script contains code to drop the procedure and then create it (a sketch of such a script is shown after this description).
I have a C# app to concatenate all the SQL files into one single SQL file. Let's call it the FINAL script. When creating the FINAL script I can specify a version number, which adds an update statement to update the version table in the database.
The FINAL script is made available for customers to download and execute on the database. So the script mainly contains any add/edit to SPs, UDFs, and static data. It does not touch any existing data (data entered by users) in most cases.
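For context, a minimal sketch of what one of those per-object scripts, plus the version bump the FINAL script appends, might look like; all object, file and version names here are invented:

    -- StoredProcedures\usp_GetCustomer\usp_GetCustomer.sql (hypothetical)
    IF OBJECT_ID(N'dbo.usp_GetCustomer', N'P') IS NOT NULL
        DROP PROCEDURE dbo.usp_GetCustomer;
    GO
    CREATE PROCEDURE dbo.usp_GetCustomer
        @CustomerId int
    AS
    BEGIN
        SELECT CustomerId, Name FROM dbo.Customer WHERE CustomerId = @CustomerId;
    END
    GO

    -- Appended by the concatenation step when a version number is supplied:
    UPDATE dbo.DatabaseVersion SET VersionNumber = '1.0.2', AppliedOn = GETDATE();
    GO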
As a newbie to TFS and SSDT I am not exactly sure how this can be done using SSDT/TFS, or if there is a better way of doing something similar. So far what I have understood about SSDT and TFS is:
I can import an existing database to SSDT project.
This will create scripts for all objects including tables.
I can easily do a publish of the database to a local server or to a server I have access to.
Things that seem confusing so far:
How do I supply clients with my latest update script? I am thinking of manually including the FINAL script in the SSDT project, but there must be a better way of doing it.
How do I publish the changes to a copy of the database without the loss of any user-entered data? My guess is that, when publishing, the tables get created. I can take care of the static data but I am not sure how to handle data entered by users.
Maybe there is something fundamentally wrong in my understanding of this whole thing. That is why I am here... :)
You want to pull your DB into a SQL Project. Maintain all of your changes there. This tells your system what the schema of your database should be. From there, I'd generate the dacpac files (through building the project) and provide those to your clients along with having them install the SSDT tools that include SQLPackage. They can run SQLPackage to make changes to their database to handle the schema changes automatically. This will bring their database in line with your schema, no matter how far off it might be.
I'd also create a publish profile for them to use. This lets you control some of the settings.
You can choose to not drop any objects not in your project
You can choose to ignore users/permissions
You can set an option to not allow changes if there would be data loss.
You can wrap everything in a transaction so a failed update rolls back
If you give them a batch file to run, you can specify an output file or a Diff report, or have them generate their own script to do the update.
I blogged about this at http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html
(or http://schottsql.blogspot.com/search/label/SSDT if that doesn't work well). That will take you through some basics of why you might want to use SQL Projects, creating them, maintaining them, and publishing the changes to an existing database.

Backup only new or edited records

I have built a SQL Server Express database that is going to be housed on an external hard drive. I need to be able to add/update data in the database from my system as well as other systems, and then back up or transfer only the data that has been added or edited to the external hard drive. What is the best way to accomplish this?
You would probably use replication for this, but as you're using SQL Server Express this isn't an option.
You'll need some sort of mechanism to determine what has changed between backups. So each table will need a timestamp or last-updated datetime column that's updated every time a record is inserted or updated (see the sketch below). It's probably easier to update this column from a trigger rather than from your application.
Once you know which records are inserted or updated then it's just a matter of searching for these from the last time the action was performed.
An alternative is to add a bit column which is updated but this seems less flexible.
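A rough sketch of the trigger approach; the table, key and column names are hypothetical:

    -- Track when each row was last touched.
    ALTER TABLE dbo.Orders ADD LastUpdated datetime NOT NULL DEFAULT (GETDATE());
    GO
    CREATE TRIGGER trg_Orders_LastUpdated ON dbo.Orders
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE o
        SET    LastUpdated = GETDATE()
        FROM   dbo.Orders AS o
        JOIN   inserted   AS i ON i.OrderId = o.OrderId;
    END
    GO

    -- At backup time, pick up everything changed since the previous run:
    DECLARE @LastBackupTime datetime;
    SET @LastBackupTime = DATEADD(day, -1, GETDATE());  -- in practice, read this from wherever the last export time is recorded
    SELECT * FROM dbo.Orders WHERE LastUpdated > @LastBackupTime;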
Sherry, please explain the application and what the rationale is for your design. The database does not have any mechanism to do this. You'll have to track changes yourself, and then do whatever you need to do. SQL Server 2008 has a change tracking feature built in, but I don't think that will help you with Express.
Also, take a look at the Sync Framework. Adding this into your platform is a major payload, but if keeping data in sync is one of the main objectives of your app, it may pay off for you.
In an application
If you are doing this from an application, every time a row is updated or inserted, set a bit/bool column called dirty to true. When you select the rows to be exported, select only the rows that have dirty set to true. After exporting, set all dirty flags back to false (see the sketch below).
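A minimal sketch of that select-then-reset step; the table and column names are made up:

    -- Export only the rows flagged as changed, then clear the flag.
    BEGIN TRANSACTION;
        SELECT ContactId, Name, Email FROM dbo.Contacts WHERE dirty = 1;  -- feed this result set to the export
        UPDATE dbo.Contacts SET dirty = 0 WHERE dirty = 1;
    COMMIT TRANSACTION;
    -- Note: a row modified between the SELECT and the UPDATE would lose its flag;
    -- a stricter version could capture the exported keys first and reset only those.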
Outside an application
DTS Wizard
If you are doing this outside of an application, then run this at the Command-Line:
Run "C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTSWizard.exe"
This article explains how to get the DTS Wizard (it is not included by default).
It is included in the SQL Server Express Edition Toolkit – and only that. If you have installed another version of SSE, it works fine to install this package afterwards without uninstalling the others. Get it here: http://go.microsoft.com/fwlink/?LinkId=65111
The DTS Wizard is included in the option "Business Intelligence Development Studio", so be sure to select that for install.
If you have installed another version of SSE, the installer might report that there is nothing to install. Override this by checking the checkbox that displays the version number (in the installer wizard).
After the install has finished, the DTS Wizard is available at c:\Microsoft SQL Server\90\DTS\Binn\dtswizard.exe; you might want to make a shortcut, or even include it on the Tools menu of SQL Studio.
bcp Utility
The bcp utility bulk copies data between an instance of Microsoft SQL Server and a data file in a user-specified format. The bcp utility can be used to import large numbers of new rows into SQL Server tables or to export data out of tables into data files. Except when used with the queryout option, the utility requires no knowledge of Transact-SQL.
To import data into a table, you must either use a format file created for that table or understand the structure of the table and the types of data that are valid for its columns.

Creating a CHANGE script in Management Studio?

I was wondering if there is a way to automatically append to a script file all the changes I am making to my columns, tables, relationships etc...
The thing is, I am making a lot of different changes on a TEST db, and the idea is to apply this change script when I move the test db to production... hence keeping production data but applying all schema and object changes.
Is there an easy way to do this? Can it also migrate database diagram changes?
I have seen how you can create a change script each time I make a change, but this means I have to copy and paste it into a master file. Actually pretty easy!
I was just wondering if I was missing something?
Do not make changes to the test server using the UI. Write scripts and keep them under source control. You can test your scripts starting from backups of the live data, and you can tune your scripts until they achieve the desired result. Then you can check in the scripts for reference and later apply them to the live server. See this article: Version Control and Your Database.
BTW, check out the SSMS Tools Pack; I think it may do what you want (I'm not sure). My advice stands nonetheless: version your schema, use explicitly created/saved scripts, use source control.
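For example, a change script kept under source control might guard each change so the script can be re-run safely against any copy of the database; the file, table and column names here are purely illustrative:

    -- 2012-05-14_add_middle_name.sql (hypothetical migration script)
    IF COL_LENGTH(N'dbo.Customer', N'MiddleName') IS NULL
    BEGIN
        ALTER TABLE dbo.Customer ADD MiddleName nvarchar(50) NULL;
    END
    GO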
There's no way to directly generate a "delta" script in SSMS.
However, if every time you publish changes you script out the entire database, including data, to SQL using the SQL Server Database Publishing Wizard, you should be able to extract diffs between the versions and get your deltas that way.
If money is no object, you can purchase Visual Studio Team System Database Architect edition and use its fantastic database comparison tools to generate and version control exactly the diffs you want.
Try using tablediff, which came with SQL Server 2005:
SQL Server 2005 TableDiff Utility
tablediff Utility
We have a process where, when a developer is done with a change, they script it out and check it into Subversion. In Subversion we have a folder for Tables, Stored Procs, Data, etc. They script it out so it is repeatable (i.e. don't insert the new data if it is already there; see the sketch below). This is important to do anyway so you keep the history of changes for a given object in the database.
In the past, we would just enter each of the files that we wanted scripted out into a text file (e.g. FileListV102.txt). When we were ready to make a release we would do a "get latest" on all of the files (from VSS back then). We then had a simple utility that would read the "file list" file and open each of those files in turn, concatenating them into an output file. That is pretty easy to code.
We outgrew that and now we have a release management tool (which can be found here and will be on sale in mid-September) that takes all of the files and creates a big SQL script file out of it. It does it in the order that you would expect based on the folder names – so files found in the "Tables" folder are done before those in the "Data" folder, etc.
Either way, once you are done you have a big SQL script file that you can then apply to a fresh copy of production and that is what you test against.
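The "repeatable" data scripts mentioned above might look like this, guarding the insert so that rerunning the release script is harmless; the table and values are invented:

    -- Data\OrderStatus.sql (hypothetical): safe to run more than once.
    IF NOT EXISTS (SELECT 1 FROM dbo.OrderStatus WHERE StatusCode = 'SHIPPED')
    BEGIN
        INSERT INTO dbo.OrderStatus (StatusCode, Description)
        VALUES ('SHIPPED', 'Order has left the warehouse');
    END
    GO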
I know I'm way late to the party, but I just wanted to add that there are dozens of third-party products out there. Some are very good, some are very cheap or free, and some are a mixture. I listed 22 here:
http://bertrandaaron.wordpress.com/2012/04/20/re-blog-the-cost-of-reinventing-the-wheel/
We have been using a relatively new piece of software called Kal Admin.
It has a change management feature and lets you distribute selected changes to other databases very easily. We used to do this by comparing two databases, but that did not satisfy our need for change tracking.
BTW, Kal Admin has metadata and data compare capabilities as well.