I'd like to specify the database schema for a query by comment. I know you can do this for files in the settings of IntelliJ, but since that information is stored in the IntelliJ configuration (I guess), which isn't shared in our company through git, it is lost when the project is shared through a VCS like git, and other people then don't get correct validation of the queries.
What I'd like to do is basically something like this:
#schema=foo
SELECT * FROM bar;
Which would be the same as writing:
SELECT * FROM foo.bar;
What for? Basically just for code completion and validation from IntelliJ that your query is correct and has no syntactic or logical errors. Does anyone know of a plugin or hidden functionality for this? I searched around on Google but didn't find anything.
Edit: It would be nice if you could specify these comments for the whole file or only for single queries (the first would be better, the second optional).
Edit 2: It may look strange that I don't just write the second example, with the schema, directly. But if I leave the schema out, I can load the file into e.g. Java and specify the schema dynamically in my source code through the database connection.
Just use appropriate USE statements in the SQL file/console. IntelliJ IDEA honors them when doing resolve and code completion. This is implemented so that you get the same experience when executing the script on its own as within the IDE. That way the script is valid from both points of view: stand-alone execution and IntelliJ IDEA intellisense.
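For example (a minimal sketch; USE is the MySQL/SQL Server spelling, and other dialects differ, e.g. SET search_path in PostgreSQL or ALTER SESSION SET CURRENT_SCHEMA in Oracle):

-- Everything after this statement resolves against the foo schema,
-- both for IntelliJ's completion/validation and for actual execution.
USE foo;

SELECT * FROM bar; -- resolved and validated as foo.bar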
I have set up a project in DataGrip with several sql files spread over a couple of directories like this:
My hope is to manage the complexity as this turns into hundreds of files. This is a learning/proof of concept level effort right now.
What I want to do is have a way to run/build/publish this project, but at present the best I have found is to select the files and then do a "Run Files" (Ctrl+Shift+F10). This worked for a while, but now I have a foreign key that gets run in the wrong order. I don't want to resort to a hack like prefixing the file names with integers to force a specific order; it feels like a real kludge.
How should I accomplish this? I must have missed something, since the alternative is very manual and error-prone. If it matters, the database I am working against is Oracle.
Since DataGrip 2020.1, you can create a Run Configuration and specify a data source and multiple files or scripts.
Refer to the DataGrip blog post.
I've got a question about building a deployment script using SSDT.
Could anyone tell me if it's possible to build a deployment script using SQLPackage.exe where the source is NOT a dacpac file, but the .sql files instead?
To give some background, I've created a project in Visual Studio 2012 for my database schema. This works great, and SSDT builds the folder structure without a problem (functions, stored procedures, etc., which contain all the .sql files).
Here's the problem - the database in question is from a legacy system and is riddled with errors. Most of these errors we don't care about any more, and it's not practical or safe to fix them all, so for years we've basically ignored them. However, it means we can't build the project and therefore can't generate the dacpac file. Now, this doesn't prevent us from doing the schema compare and syncing the database with the file system (a local Mercurial repository). However, it does seemingly prevent us from building a deployment script.
What I'm looking for is a way of building the deployment script using SQLPackage.exe without having to generate the dacpac file. I need to use the .sql files in the file system instead. Visual Studio will produce a script of the differences without building the dacpac, so this makes me think it must be possible to do it using SQLPackage.exe using one of the parameters.
Here's an example of a SQLPackage.exe call which I'd like to adapt to use the .sql files instead of the dacpac:
sqlpackage.exe /Action:Script ^
  /SourceFile:"E:\SourceControl\Project\Database\test_SSDTProject\bin\Debug\test_SSDTProject.dacpac" ^
  /TargetConnectionString:"Data Source=local;Initial Catalog=TestDB;User ID=abc;Password=abc" ^
  /OutputPath:"C:\temp\hbupdate.sql" ^
  /OverwriteFiles:true ^
  /p:IgnoreExtendedProperties=True ^
  /p:IgnorePermissions=True ^
  /p:IgnoreRoleMembership=True ^
  /p:DropObjectsNotInSource=True
This works fine because it uses the dacpac file. However I need to point it at the folder structure where the .sql files are instead.
Any help would be great.
As has been suggested in comments, I think that biting the bullet and fixing the errors is the way ahead. You say
it's not practical or safe to fix them all,
but I think you should give this a bit more thought. I have recently been in a similar situation to you, and the key to emerging from it is to realise that the operational risk associated with dropping procedures and functions that will throw an exception as soon as they are called is zero.
Note that this does not apply if the reason these objects won't build is that they contain cross-database or cross-server references that are present in production but not in your project; this is a separate problem altogether, but also a solvable one.
Nor am I in favour of "exclude from build" as an alternative to "delete"; a while ago I saw a project where this technique had been deployed extensively; it makes it harder to see what does what from the source files and I am now of the opinion that "Build Action=None" is simply "commenting out the bits that don't work" for the Snapchat generation.
The key to all of this, of course, is source control. This addresses the residual risk that one day you might indeed want to implement a working version of one of your currently non-working procedures, using the non-working code as a starting point. It also obviates the need to keep stuff hanging around in the solution using Build Action=None, as one can simply summon an earlier revision of the code that contained the offending objects.
If my experience is any guide, 60 build errors is nothing; these could easily be caused by references to three or four objects that no longer exist, and can be consigned to the dustbin of source control with some enthusiastic use of the "Delete" key.
Do you have a copy of SQL Compare at your disposal? If not, it might be worth downloading the trial to see if it will work in your scenario.
Here are the available switches:
http://documentation.red-gate.com/display/SC10/Switches+used+in+the+command+line
At the very least you'll need to specify the following (a sketch of the full command follows the list):
/scripts1:
/server2:
/database2:
/ScriptFile:
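Put together, the command would look something like this (a sketch only; the executable name assumes SQL Compare's command line, and the paths and database name are placeholders for your own setup):

SQLCompare.exe /scripts1:"E:\SourceControl\Project\Database" ^
  /server2:local /database2:TestDB ^
  /ScriptFile:"C:\temp\hbupdate.sql"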
I have a mavenized SQL project which has a few SQL scripts. These scripts are basically Oracle statements of the form
insert xyz into some_table;
show errors
These files are to be copied to a deploy project, but for the statements to work, some modifications have to be made.
For the moment, we are editing the files manually (with the help of Notepad++) so that they turn out like this:
insert xyz into some_table\
Is there any way to do this with Maven? I'm not sure this is exactly resource filtering, and I'm pretty sure that this is not the purpose of Maven. I've been looking at the maven-assembly-plugin for the moment, but no luck.
Any thoughts will be appreciated.
Thanks in advance
If it's a fairly simple text replacement, then you can use maven-replacer-plugin:
http://code.google.com/p/maven-replacer-plugin/
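For example, to rewrite the trailing semicolons and strip the show errors lines during the build (a minimal sketch; the phase, the include pattern and the exact tokens are assumptions you'd have to adapt to your layout):

<plugin>
  <groupId>com.google.code.maven-replacer-plugin</groupId>
  <artifactId>replacer</artifactId>
  <version>1.5.3</version>
  <executions>
    <execution>
      <!-- run after the scripts have been copied to target/ -->
      <phase>prepare-package</phase>
      <goals>
        <goal>replace</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <includes>
      <include>target/deploy/**/*.sql</include>
    </includes>
    <regex>true</regex>
    <replacements>
      <!-- turn a trailing ";" into "\" at the end of a line -->
      <replacement>
        <token>(?m);$</token>
        <value>\\</value>
      </replacement>
      <!-- drop the SQL*Plus "show errors" lines -->
      <replacement>
        <token>(?m)^show errors *$</token>
        <value></value>
      </replacement>
    </replacements>
  </configuration>
</plugin>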
If all else fails, you can also use maven-antrun-plugin to do anything that Ant can do. Here is a prior question that discusses this:
Full search and replace of strings in source files when copying resources
I'd like to store a database schema in its own file, and have my Scala code retrieve it (and execute it via JDBC).
It seems to me that sbt wants me to store the file as: src/main/resources/packagename/my.sql. Putting it there, I see it's in the jar - but I can't seem to access it from Scala.
Specifically, getClass().getResource("my.sql") returns null, and so does any other form I can think of.
How should I load the file? Or is there a better way to do it?
I had an almost identical problem.
The only difference is my file is in src/main/resources (without any packages).
This worked for me.
import java.io.InputStream

val is: InputStream = Github.getClass.getResourceAsStream("/repo.json")
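The leading slash is the important part: without it, getResource and getResourceAsStream resolve the path relative to the package of the class they are called on; with it, they resolve from the classpath root. Adapted to your layout it would look something like this (a sketch; packagename stands for your actual package directory, and you'd want to handle the null that comes back when the resource is missing):

import scala.io.Source

// src/main/resources/packagename/my.sql is packaged as /packagename/my.sql
val schemaSql: String =
  Source.fromInputStream(getClass.getResourceAsStream("/packagename/my.sql")).mkString

// schemaSql can now be executed over JDBC, e.g.
// connection.createStatement().execute(schemaSql)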
Why don't you generate the file, store it as "my.sql", and then look it up later wherever it appears in the filesystem?
I'm trying to use the database project in VS2010, but my setup is a bit different from standard and I can't find an easy way to get it to work.
I have a "model" project which contains some xml model definitions of a simple information for an ETL process. As well as the schema for the supplied information, it contains other metadata, for example details of which columns need to be matched up with other tables, what to do in case of a non-match, etc, etc.
Using T4 templates, I then generate sql scripts, views and tables to manage the whole thing - one sql file per xml file. There are around 30 xml definitions, but the number of parameters is small and the pattern very repetitive, so it works well.
I want to dump these sql files into the database project, in order to get it to generate the deploy scripts and identify database changes for me. I can arrange for the files to be combined into one script. Is there a way to get VS to analyse the scripts automatically, or do I need to import them every time?
EDIT: I originally asked about getting VS not to split my scripts up into individual components. I found a solution to this: copy the existing script into the project and - crucially - change the "build action" for the script to "Build" (for some reason, the default is "not in build"). VS will then add the item into the model and it will be part of the generated scripts - yay! However, there's still no way to reference scripts in other projects...
I've read the MS how-to for database projects, but didn't find anything in it that seemed relevant
Thanks for your help,
You can do this with T4 Toolbox. Here is how: http://www.olegsych.com/2010/03/t4-tutorial-integrating-generated-files-in-visual-studio-projects/. Specifically, you want to take advantage of the Template.Output.File and Template.Output.Project properties.
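In a generator that would look roughly like this (a sketch based on that tutorial; MySqlTemplate and both output paths are hypothetical placeholders for your own template and projects):

<#@ include file="T4Toolbox.tt" #>
<#
    // MySqlTemplate stands in for your own Template subclass that renders
    // one xml model definition to a SQL script.
    var template = new MySqlTemplate();

    // Send the generated script into the database project instead of the
    // model project, so it takes part in that project's build.
    template.Output.File = @"Tables\FromModel.sql";
    template.Output.Project = @"..\Database\Database.dbproj";
    template.Render();
#>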
Oleg