I am part of a software development company looking for a good way to update my SQL Server tables when I release a new version of the software. I know the answer is probably to use scripts in one form or another.
I am considering writing my own .NET program that runs the scripts to make it a bit easier and more user-friendly. I was wondering if there are any tools out there along those lines. Any input would be appreciated.
I suggest you look at Red Gate's SQL Compare.
What kind of product are you using for your software installation? Products like InstallShield now often include SQL steps as an option in your install script.
Otherwise, you could look at using isql/osql to run your script from the command line through a batch file.
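For example, a deployment batch file could contain a line like this (the server, database and file names are placeholders):
osql -S myserver -d mydatabase -E -i upgrade.sql
The -E switch uses Windows authentication; -U and -P can be used instead for a SQL login.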
One of the developers where I'm currently consulting wrote a rather nifty SQL installer. I'll ask him when he gets in how he went about it.
I use Red Gate's SQL Compare all the time. Also, make sure to provide a rollback script in case you need to go back to the previous version.
Have a look at DB Ghost Packager Plus.
Packages your source database and the compare and sync engine into a simple EXE for deployment. The installer EXE will automatically update any target schema to match the source on-the-fly at installation time.
Use Red Gate's SQL Compare to generate the change script, and Red Gate's Multi Script to easily run it against multiple SQL databases at the same time.
I have a question regarding automation. In my project we have 35 SQL scripts with the same logic; only 4 parameters differ between them. How can I automate these in TOAD for Oracle?
Depending on your version of Toad, there may be an 'Automation Designer' under the 'Utilities' menu item. This will allow you to run scripts automatically, based on a bit of logic. It also supports running with parameters.
The tool 'Toad for Data Analysts' can also be used to model automated scripts, and run them with specific parameters.
If you have any of these tools available, I would suggest giving them a try, or at least read up on their documentation. If you don't have access to these, let me know so I can try and think of a different solution.
We are building an SSIS package for a customer where a lot of conversion checks happen in one data flow task. We output any errors via a script that generates a new record in our error table. That error table can then be consulted to check whether any errors occurred.
We are aware that there are a few scripts and components out there to retrieve the real column name from an error output, but those are all for 2008 R2 or lower.
Secondly, we compared our 2012 XML with the 2008 R2 and 2008 XML, and there is no longer any sign of a lineageID, so we think the scripts and components above will no longer work.
The weird thing, though, is that we can see our lineageIDs in the designer's advanced edit screen. Sadly, we fear those are generated at runtime and can differ from run to run, so we can't script against them to retrieve the real column name.
Does anybody have any tips or tricks to resolve this?
Kind regards,
Tom
Benny Austin's solution
This provided me with an answer. Do read the comments about the package, though; you might need to fiddle with some things before it works, but eventually it does.
I have a series of SQL files, and I want to compile all of them. If I do them manually using Benthic software, it takes a long time. I tried using TOAD, but I don't like to use cracked software. Can you help me execute a SQL file? I want to write a program to do some things for me.
From SQL*Plus I tried to create a batch file, but in some of my SQL files the developer used "/" and ";" inconsistently, which caused SQL*Plus to stop the compilation partway through. Please advise or recommend free software to help.
"I want to apply the SQL packages and functions, and if they are invalid, compile them again."
I am using Oracle 10g.
Thanks
If you are looking for something like TOAD, try SQL Developer, a free tool from Oracle.
If you want to recompile existing source in your database, you can use dbms_utility.compile_schema.
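For example, from SQL*Plus (the schema name is a placeholder):
BEGIN
  DBMS_UTILITY.COMPILE_SCHEMA(schema => 'MYSCHEMA', compile_all => FALSE);
END;
/
With compile_all => FALSE, only objects that are currently invalid get recompiled, which matches the "if they are invalid, compile them again" requirement.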
If you try to install several files in a batch, SQL*Plus should work for you. Make sure that your files have the right format.
It sounds like you need to run a large SQL script. Correct? SQL*Plus should work, but if you want a free IDE, I recommend SQL Developer. It isn't perfect, but it is one of the better free solutions.
"in some of my sql file developer used "/" and ";" "
You need to consistently use these to have any hope of using a tool to deploy. You don't want to have to use a GUI to deploy, so SQL*Plus is the standard to aim for. A good Oracle GUI should be able to run a SQL*Plus deployment script.
I generally start with SET DEFINE OFF; otherwise unexpected ampersands (&) cause issues.
Do some basic grepping - any script with a CREATE PACKAGE, CREATE PROCEDURE, CREATE TRIGGER or CREATE TYPE (including CREATE OR REPLACE) should have the "/" to execute that statement. If they don't, fix them.
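For example, a procedure script should end like this (the body is just a placeholder):
CREATE OR REPLACE PROCEDURE my_proc AS
BEGIN
  NULL;
END;
/
The ";" only terminates the PL/SQL block; it is the "/" on its own line that tells SQL*Plus to actually execute the CREATE statement.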
For a first run though, I'd take them in batches of 10, until I was sure that they executed correctly. I wouldn't worry about compilation errors as long as they load. Afterwards, you can do a recompile of any invalid objects in the schema.
Then check USER_ERRORS to see what is still broken.
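Something along these lines (run as the schema owner) will show what is still invalid and why:
SELECT name, type, line, position, text
FROM user_errors
ORDER BY name, sequence;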
First the question, then some background.
We're using Visual Studio 2008, C# 3.0 and .NET 3.5, and TFS 2008 as our VCS.
If I execute this command against our TFS database, to show information about a merge commit:
tf changeset 13469 /noprompt
I get output like this (redacted):
Changeset: 13469
User: Lasse
Date: 12. november 2010 14:06:06
Comment:
Some text here.
Items:
merge, edit $/path/to/target/filename.txt
... more merged files
... some blurb about reviewer texts, etc. nothing important/useful here
This was merged from a different path in the same database, but this information is not available here.
For instance, if I merged from $/path/to/main/ down to $/path/to/branch/, the path to the main project is not available in the merge changeset. (Note: please don't say I'm merging the wrong way; it doesn't matter in this case, so I just kept it simple.)
So, the question is this: Is there any way I can find out where that changeset was merged from? Which branch it came from? ... and which changeset it originated as in that branch (like 13468? 13462? 13453? ...)
Background
We haven't used much branching and merging so far, except for simple stuff like "tagging" a release.
From now on we're looking at using branching much more actively, but this creates a challenge.
Let's say I open up our bug tracker, take the topmost bug, fix it, and check it in. This is done in one branch; let's say this is the master branch.
Now, at some point, a tester is going to verify that the hotfix we're going to release has this bug fixed, so he opens up our product and wants to verify before he starts that the bugfix has actually gone into this build.
When we didn't use branching, we simply took the changeset number of the commit that ultimately fixed a case and typed that into the case itself. Additionally, our product was built with a build number (the 4th part of the version number) identical to the latest changeset that became part of the build.
This way, the tester could simply look at the case, the version number and easily deduce if the build had that changeset or not. If the changeset number in the version number was equal to or higher than the one in the case, the changeset was part of that build.
With branches, that doesn't work. If I commit changeset X on the master branch but forget to merge, the tester can't simply say "if I run version X or higher, I've got that fix" any more.
Note that we're not using TFS work items, so there's no easy built-in way to link commits and cases.
The reason I asked about the TFS history output was that I assume that if I can see that changeset 13469 really came from another branch, and corresponds to changeset 13462 there, and the programmer has noted 13462 on the case, I can say "13462 is now part of the build, because it was merged to the right branch, became 13469, and the build output has version 13470."
In other words, I could build a tool that, as part of the build, looked at the history of the database, grabbed all the necessary information and stored it in a database, so that I could take cases on our ready-to-test list, compare against the version number of the executable the tester was running, and just list all cases that are both ready to test and part of that build.
So my question is really this: does anyone have any hints on how we can solve this? Perhaps we're boneheaded and need to be told the right way to do this, so if you've got any good ideas, let me know.
I hear and feel your lament here, as we've run into the same limitation. With TFS 2008, there's no easy way to see that history. With TFS 2010, and the branch visualizer, it gets easier.
If this is something you really need, you could potentially write it yourself using the TFS API. You would have to walk your way back through the various changesets for the files. It would be relatively straightforward to code:
Get merge changeset
Get prior merge changeset
Determine merge source from the first changeset
Get history for the file between the dates of the two changesets.
I've done this manually before, but you could do it in C# code or, alternatively, write a PowerShell script to do it.
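Incidentally, the tf command line may already get you part of the way. Assuming paths like the ones above, something like
tf merges $/path/to/branch /recursive
should list each merge into the branch along with the source changeset it came from, which you could then match against the changeset number noted on the case.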
As part of a release we run a load of PL/SQL scripts against a database. Recently someone left the ";" off the end of a line in one script that calls another script, so that script did not get run. Because this did not cause an error - it just didn't run - it took quite a while to track down what had happened.
I want to check the scripts, before they are run, for lines that are missing either a ";" at the end or a "/" on the line after. This is made more complicated because a 'line' in the script can actually span more than one physical line if it is a statement or block of code.
It seems to me that to do this I'm going to have to parse the scripts and then check that they meet the rules above.
I've found ANTLR and wonder if this might be a way to do it, since there seem to be existing PL/SQL grammars, but that looks like a steep learning curve for what's just a simple check.
Does anyone know an easy way or any other tools, eclipse plugins etc that I can use to check for lines in the scripts that are missing either a ; at the end or a / on the line after?
Update
We already do most of the stuff Tom H suggested. The scripts are run into our test server, and we have a version table that gets updated at the end. The problem was that the missing semi-colon in the container script meant one script did not get run, but the rest, including the one to update the version number, ran without errors. Therefore the problem only got picked up quite a way into testing. The database then had to be restored before re-running the scripts with the missing semi-colon added, so basically half a day of testing time was lost. If there was a simple way to check this before running the scripts into the test server, it could save quite a bit of time.
I agree with MattH that you may be going about this the wrong way. I would just add an insert statement to the end of each of your scripts that inserts a "version" row into a table in the database. At the end of your deployment scripts it's then an easy task to check that the version table has all of the correct rows in it.
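As a sketch (the table and column names here are made up), each script would end with something like:
INSERT INTO schema_version (script_name, db_version, applied_on)
VALUES ('042_customer_fix.sql', '1.4.2', SYSDATE);
The deployment check is then just a comparison of the rows in schema_version against the list of scripts that should have run.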
Also, you should have all of your release scripts being run exactly as they will be in production against your QA server. That's where all of the testing takes place. You never do anything to the server besides what is in your release steps - you only run the release scripts and if those release scripts are ever changed then you refresh the QA server with them and redo testing.
When you go to production, your release process has then been fully tested. As a fail-safe measure you can also use tools like Red Gate's SQL Compare and SQL Data Compare to check that production matches the QA server. The data compare would only be against certain tables (look-up tables, etc.). If you have data changes to major tables (1M rows, etc.), then you can write a custom script to check that they are correct.
Even if the scripts are different for every release (and not part of a defined source control structure that creates or replaces database objects) I would adopt a practice of breaking the scripts down into the most fundamental units of work per file and deploying them through Ant with the standard sql task. You probably have these types of scripts:
CREATE or REPLACE dbobject...
SQL DML scripts
Anonymous PL/SQL blocks
If you standardize on a consistent statement delimiter (I suggest using "/" since it works with all of the cases above) and set the deployment to fail on error, then Ant will either deploy all of the files or indicate why it couldn't.
I think it would be very difficult to otherwise parse files of one or more SQL and/or PLSQL statements and find missing delimiters if there are no standards on delimiter choice or statements per file.
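To illustrate the standardization suggested above (the object names are invented), a file using "/" as the only delimiter could look like:
INSERT INTO app_config (name, value) VALUES ('feature_x', 'on')
/
CREATE OR REPLACE PROCEDURE refresh_config AS
BEGIN
  NULL;
END;
/
SQL*Plus executes whatever is in its buffer when it sees "/" on a line by itself, and Ant's sql task can be configured to split on the same row delimiter, so the same file deploys cleanly both ways.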
Just a thought, but are you going about this the wrong way?
I assume that, at the file level, the lack of a semicolon was not a problem, but it only became a problem when run via the batch processing? If that's the case, maybe you can change your batch processing to cope with it.
If it was the file itself, then testing should have picked it up. You don't want to have to parse your input files to make sure they compile, etc.