I have a list of .sql script files that create stored procedures, which I'm developing with Eclipse DTP. Currently, to create or update all of these stored procedures, I have to open and run them
one by one from the Data perspective.
Is there a way to create a batch file that run the scripts along the lines of
run createSP1.sql
run createSP2.sql
...
run createSPn.sql
and run it in Eclipse DTP so that it uses the DB connection defined there?
Why not just create a batch file that merges all of your .sql files together into a single procs.sql file as part of the build process? I don't know what platform you're running on, but on Windows you could have a .bat file that does something like this:
type *.sql > procs.sql
Then, to apply it to the database, why not do it outside Eclipse and connect to the database via the command line? You could bundle this all up as a single batch file that gets the latest version of your stored procedures from source control, merges them into a single file, and then applies it to the database.
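For example, a minimal sketch of such a batch file (the source-control command, folder paths, server name, and the choice of sqlcmd as the command-line client are all assumptions; substitute your own tools):
rem get the latest stored procedure scripts from source control (Subversion assumed here)
svn update C:\myproject\procs
rem merge them into a single file
type C:\myproject\procs\*.sql > procs.sql
rem apply the merged file to the database from the command line
rem (sqlcmd shown for SQL Server; use your database's equivalent client)
sqlcmd -S myserver -d mydatabase -E -i procs.sql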
Part I
As far as I know, the developers of Eclipse DTP have not yet implemented a command-line SQL execution interface through the Eclipse console view. See the following thread on the Eclipse DTP developer forum:
http://dev.eclipse.org/newslists/news.eclipse.dtp/msg00304.html
Part II
While the Eclipse DTP people are working on it, you can use a database-specific tool to load a master SQL file (all the SQL proc files appended together). There are database-specific console tools that will load your master SQL file from the command line (e.g. SQL*Plus for Oracle, ij for Apache Derby).
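For example, rough sketches of both invocations (user names, passwords, and connection identifiers are placeholders):
sqlplus user/password@mydb @master.sql
for Oracle's SQL*Plus, or, with the Derby tools on the classpath:
java org.apache.derby.tools.ij master.sql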
Part III
An improvement over DOS batch is using Cygwin bash, Python, or Perl to merge all of your SQL files together into a master file. I have found that the text-processing tools available in UNIX (awk, sed, cat, ...) are great for this sort of thing.
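For instance, under Cygwin bash the merge step can be a one-liner (assuming the file names follow the createSP*.sql pattern from the question):
cat createSP*.sql > master.sql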
Related
I am using IntelliJ 14.1.4. I am able to run SQL through custom-defined DB data sources in the database console.
I have some .sql files in my project and would like to execute them directly instead of copying them into the database console. However, I am not able to use the same data sources that I created for the console when setting up connections from the DB Connections drop-down.
How can I run SQL statements from .sql files with the same data sources as the ones I defined in the DB console?
Thanks
You can right-click on an .sql file in the editor, and there is an option "Run myfile.sql".
To run SQL statements from .sql files, you must associate the file type with a data source.
I have exported a database from the Oracle SQL Developer tool into a .sql file. Now I want to run this file, which is 500+ MB in size.
I read about running scripts here, but I didn't understand the method. Is there a command or query by which we can run this SQL script by providing its path?
You could execute the .sql file as a script in the SQL Developer worksheet. Either use the Run Script icon, or simply press F5.
For example,
@path\script.sql;
Remember, you need to put @ as shown above.
But if you have exported the database using the database export utility of SQL Developer, then you should use the Import utility instead. Follow the steps mentioned here: Importing and Exporting using the Oracle SQL Developer 3.0.
You need to open SQL Developer first, then click on the File option and browse to the location where your .sql file is placed. Once you are at that location, double-click the file; this will open it in SQL Developer. Now select all of the content of the file (Ctrl+A) and press the F9 key. Just make sure there is a COMMIT statement at the end of the .sql script so that the changes are persisted in the database.
You can use the Load function:
Load TableName fullfilepath;
I have a 3 GB .sql file and I can't open it directly in Management Studio, so I have to split the file and execute the parts. But how will I split the file? Or can I execute it directly without an OutOfMemory exception?
I'm using SQL Server 2014 and I didn't have success restoring the .sql file with cmd.
I have faced this before. Use the sqlcmd utility; it's very easy to use. In this case you just have to give the path of the one big script file along with a few other parameters. Refer to the Microsoft documentation.
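A minimal sketch of the invocation (server, database, and file paths are placeholders; -E uses Windows authentication):
sqlcmd -S myserver -d mydatabase -E -i C:\scripts\bigscript.sql -o C:\scripts\output.log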
Hope that helps
That sucks. You should be able to load the .sql file via the command line; this is how most data warehousing companies load large db/sql files in order to launch databases. A file like this should NOT be opened with any IDE, and loading it via the command line is the only way it's done.
If I were you, I'd try to load the file via cmd again, because that's the way to do it.
Is there a way to access Visual Studio environment variables such as $(SolutionDir) in a T-SQL seeding script? Basically, I have a set of seeding scripts, but I try to avoid referencing hard-coded paths such as C:\projects**\seeding.sql; instead, I want to use $(SolutionDir)Seeding.sql.
Here is the general process for using project variables in SQL in your database project from MSDN:
How to: Define Variables for Database Projects
This depends on what version of Visual Studio you're using (and whether you're using a database project -- highly recommended), but the general idea is that you can assign some of your project variables as SQLCMD variables and build a SQLCMD script using those variables; when you run a build, the system will use SQLCMD to invoke your script, applying the variables as necessary.
Even more to the point, the database project's recommended practice for "seeding" scripts and other post-build artifacts is to designate one SQL script as the post-build script and have it use SQLCMD syntax to invoke other .sql script files.
So in my project (for example) I have this SQL script in a Scripts folder:
:r .\PopulateNumbersTable.sql
:r .\Insert.Dimension.Date.sql
:r .\FillInMetaData.sql
:r .\Permissions.sql
This calls each of those scripts (also in the folder, using relative paths), which are run in SQLCMD mode. So in PopulateNumbersTable.sql, for example, I set the max value of the numbers table as a SQLCMD variable, so that when the script runs in the post-build it uses that value; see the sketch below.
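As a rough illustration of that pattern (the variable name and limit here are invented for the example; in a database project the value can also come from the project's SQLCMD variable settings instead of :setvar), PopulateNumbersTable.sql might contain something along these lines:
:setvar NumbersTableMax 100000
-- fill the numbers table up to the configured maximum
DECLARE @i INT = 1;
WHILE @i <= $(NumbersTableMax)
BEGIN
    INSERT INTO dbo.Numbers (Number) VALUES (@i);
    SET @i = @i + 1;
END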
Anyway, check out SQLCMD and VS Database Projects, it basically covers what you're looking for, it's plenty flexible and (for me) pretty easy to understand.
Visual Studio has a Database Project for SQL Server. This has a number of advantages: it hosts configuration settings and database objects in one place. The .sql files are part of regular .NET solutions -- visible in the Solution Explorer and editable in Visual Studio. And it has a mechanism for generating a deployment script. With each individual database object in its own file, the tracking of changes and source control is greatly simplified.
Has anyone had any success with using Database Projects with "non-SQL Server" databases? We use Sybase - which uses T-SQL and is very similar to SQL Server so I'm hopeful.
Or is there an alternative approach? I guess I could use a standard project (.csproj) and call a custom command-line application as part of the post-build to convert the .sql files into a deployment script.
Any ideas would be welcome.
Thanks
OK, I'll answer my own question.
I added all of our SQL objects to their own .sql files within a Visual Studio .dbproj project. However, minor syntactic incompatibilities between the Sybase version of RAISERROR and the Microsoft version of RAISERROR caused the validation code built into Visual Studio to get unhappy. The problem with the database project was that this actually caused a compilation error - which basically made it into a show-stopper.
So I scrapped that idea and added the .sql files to a standard .csproj project file. I then implemented some custom code that would load all of the .sql files and aggregate them into a deployment script when invoked. I added a call to the custom code to the post-build of the .csproj file, so that whenever it was compiled it would output a deployment script; this works like a dream with our build server.
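For illustration, a post-build event along these lines would do the hookup (the tool name and folder names here are placeholders, not the actual ones; $(TargetDir) and $(ProjectDir) are standard Visual Studio macros):
"$(TargetDir)SqlScriptAggregator.exe" "$(ProjectDir)Sql" "$(TargetDir)DeploymentScript.sql"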
In order to get some of the benefits of the .dbproj, I looked into writing a full SQL parser, but was quickly discouraged by some of the posts on SO. Instead I did some rudimentary parsing with regex, which got me a few cool features without a lot of effort:
The code could detect dependencies between the various .sql files, and add them to the deployment script in the correct order to avoid sysdepends warnings.
Where there were no dependencies, objects were ordered based on the object type (stored procedure, function, grant statement, etc.) and then by name, so that the resulting script was always ordered the same -- which is very important if you need to diff two versions of the script.
The deployment script can figure out some of the required permissions, so I don't need to keep track of all of the GRANT statements.
Stored procedures that are in the database but not in the script can be dropped automatically - so I don't need to keep track of what state each database is in - we just run the script and everything is in the correct state.
We have a few stored procedures that our automated tests call that shouldn't be deployed. The code can detect these and include them in a Debug build and exclude them in a Release build.
The custom code also generates a diff script that determines what changes the deployment script will make to a database and prints them out. This allows the person who is running the script to get an idea of what it will do. For example, the diff script might tell them that no changes will be made - so they don't need to run the deployment script at all - which is kind of handy if it saves them logging in at 3am to take a database offline and take backups etc.
So the end result is that all of my SQL objects are in separate files making them easy to work with in Visual Studio and manage under source control. For the first time since I started this job, I can look at the history in source control and tell what files have been changed (before this we had one enormous .sql file with absolutely everything in it).