I have a Mavenized SQL project which contains a few SQL scripts. These scripts are basically Oracle statements of the form:
insert xyz into some_table;
show errors
These files are to be copied to a deploy project, but for the statements to work, some modifications have to be made.
For the moment, we are editing the files manually (with the help of Notepad++) so that they turn out like this:
insert xyz into some_table\
Is there any way to do this with Maven? I'm not sure this is exactly resource filtering, and I'm pretty sure this is not what Maven was designed for. I've been looking at the maven-assembly-plugin for the moment, but no luck.
Any thoughts will be appreciated.
Thanks in advance
If it's a fairly simple text replacement, then you can use maven-replacer-plugin:
http://code.google.com/p/maven-replacer-plugin/
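For example, to drop the "show errors" lines and turn the trailing semicolon into a backslash, the configuration could look roughly like this (a sketch, untested; the deploy directory, phase, and plugin version are assumptions, and the tokens are Java regexes since regex mode is on):

<plugin>
  <groupId>com.google.code.maven-replacer-plugin</groupId>
  <artifactId>replacer</artifactId>
  <version>1.5.3</version>
  <executions>
    <execution>
      <phase>prepare-package</phase>
      <goals>
        <goal>replace</goal>
      </goals>
      <configuration>
        <basedir>${project.build.directory}/deploy</basedir>
        <includes>
          <include>**/*.sql</include>
        </includes>
        <regex>true</regex>
        <replacements>
          <!-- remove the "show errors" lines entirely -->
          <replacement>
            <token>(?m)^show errors\s*$</token>
            <value></value>
          </replacement>
          <!-- replace the statement-terminating semicolon with a backslash -->
          <replacement>
            <token>(?m);\s*$</token>
            <value>\\</value>
          </replacement>
        </replacements>
      </configuration>
    </execution>
  </executions>
</plugin>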
If all else fails, you can also use maven-antrun-plugin to do anything that Ant can do. Here is a prior question that discusses this:
Full search and replace of strings in source files when copying resources
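For reference, the antrun route would wrap an Ant replaceregexp task, roughly like this (again a sketch; newer antrun versions use <target>, older ones <tasks>, and the deploy directory is an assumption):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>prepare-package</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <target>
          <!-- strip every "show errors" line from the copied scripts -->
          <replaceregexp match="^show errors\s*$" replace="" flags="gm">
            <fileset dir="${project.build.directory}/deploy" includes="**/*.sql"/>
          </replaceregexp>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>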
I'd like to specify the database schema for a query via a comment. I know that you can do it per file in IntelliJ's settings, but since this information is stored in the IntelliJ configuration (I guess), which isn't shared in our company through Git, it is lost when the project is shared through a VCS, so other people don't get correct validation of the queries.
What I'd like to do is basically something like this:
#schema=foo
SELECT * FROM bar;
Which would be the same as if you write:
SELECT * from foo.bar;
What for? Basically just for code completion and for validation from IntelliJ that your query is correct and has no syntactic or logical errors. Does anyone know if there's a plugin or hidden functionality for this? I searched around Google but didn't find anything.
Edit: It would be nice if you could specify those comments for the whole file or only for single queries (the first would be better, the second optional).
Edit 2: It may look strange that I don't just write the second example with the schema included. But if I leave it out, I can load the file into e.g. Java and specify the schema dynamically in my source code through the database connection.
Just use the appropriate "use" statements in the SQL file/console. IntelliJ IDEA honors them when doing resolve and code completion. This is implemented so that you have the same experience whether you execute the script standalone or within the IDE; this way the script is valid from both points of view: standalone execution and IntelliJ IDEA's intellisense.
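For example, with a MySQL/SQL Server-style USE statement (on Oracle the equivalent would be ALTER SESSION SET CURRENT_SCHEMA):

-- switch the default schema for everything below this point
USE foo;
-- now resolved, validated and completed as foo.bar
SELECT * FROM bar;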
I have set up a project in DataGrip with several SQL files spread over a couple of directories.
My hope is to manage the complexity as this turns into hundreds of files. This is a learning/proof of concept level effort right now.
What I want to do is have a way to run/build/publish this project, but at present the best I have found is to select the files and then do a "Run Files" (Ctrl+Shift+F10). This worked for a bit, but now I have a foreign key that gets run in the wrong order. I don't want to have to resort to a hack like prefixing the file names with integers to force a specific order. It feels like a real kludge.
How should I accomplish this? I must have missed something, since the alternative is very manual and error-prone. If it matters, the database I am working against is Oracle.
Since DataGrip 2020.1, you can create a Run Configuration and specify the data source and multiple files or scripts.
Refer to the DataGrip blog post.
I've got a question about building a deployment script using SSDT.
Could anyone tell me if it's possible to build a deployment script using SQLPackage.exe where the source file is NOT a dacpac file, but uses the .sql files instead?
To give some background, I've created a project in Visual Studio 2012 for my database schema. This works great, and SSDT builds the folder structure without a problem (functions, stored procedures etc which contain all the .sql files).
Here's the problem: the database in question is from a legacy system and is riddled with errors. Most of these errors we don't care about anymore, and it's not practical or safe to fix them all, so for years we've basically ignored them. However, it means we can't build the project and therefore can't generate the dacpac file. Now, this doesn't prevent us from doing the schema compare and syncing the database with the file system (a local Mercurial repository). It does, however, seemingly prevent us from building a deployment script.
What I'm looking for is a way of building the deployment script with SQLPackage.exe without having to generate the dacpac file; I need to use the .sql files in the file system instead. Visual Studio will produce a script of the differences without building the dacpac, which makes me think it must be possible with SQLPackage.exe using one of its parameters.
Here's an example of SQLPackage.exe which I'd like to adapt to use the .sql files instead of the dacpac:
sqlpackage.exe /Action:Script ^
  /SourceFile:"E:\SourceControl\Project\Database\test_SSDTProject\bin\Debug\test_SSDTProject.dacpac" ^
  /TargetConnectionString:"Data Source=local;Initial Catalog=TestDB;User ID=abc;Password=abc" ^
  /OutputPath:"C:\temp\hbupdate.sql" /OverwriteFiles:true ^
  /p:IgnoreExtendedProperties=True /p:IgnorePermissions=True ^
  /p:IgnoreRoleMembership=True /p:DropObjectsNotInSource=True
This works fine because it uses the dacpac file. However I need to point it at the folder structure where the .sql files are instead.
Any help would be great.
As has been suggested in comments, I think that biting the bullet and fixing the errors is the way ahead. You say
it's not practical or safe to fix them all,
but I think you should give this a bit more thought. I have recently been in a similar situation to you, and the key to emerging from it is to realise that the operational risk associated with dropping procedures and functions that will throw an exception as soon as they are called is zero.
Note that this does not apply if the reason these objects won't build is that they contain cross-database or cross-server references that are present in production but not in your project; this is a separate problem altogether, but also a solvable one.
Nor am I in favour of "exclude from build" as an alternative to "delete"; a while ago I saw a project where this technique had been deployed extensively; it makes it harder to see what does what from the source files and I am now of the opinion that "Build Action=None" is simply "commenting out the bits that don't work" for the Snapchat generation.
The key to all of this, of course, is source control. This addresses the residual risk that one day you might indeed want to implement a working version of one of your currently non-working procedures, using the non-working code as a starting point. It also obviates the need to keep stuff hanging around in the solution using Build Action=None, as one can simply summon an earlier revision of the code that contained the offending objects.
If my experience is any guide, 60 build errors is nothing; these could easily be caused by references to three or four objects that no longer exist, and can be consigned to the dustbin of source control with some enthusiastic use of the "Delete" key.
Do you have a copy of SQL Compare at your disposal? If not, it might be worth downloading the trial to see if it will work in your scenario.
Here are the available switches:
http://documentation.red-gate.com/display/SC10/Switches+used+in+the+command+line
At the very least you'll need to specify the following:
/scripts1:
/server2:
/database2:
/ScriptFile:
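Adapted to the sqlpackage example above, the call might look something like this (a sketch; the scripts folder is wherever your .sql files live, and you may also need /username2 and /password2 for SQL authentication):

sqlcompare /scripts1:"E:\SourceControl\Project\Database\test_SSDTProject" ^
  /server2:local /database2:TestDB ^
  /ScriptFile:"C:\temp\hbupdate.sql"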
I'm working with Gendarme for .NET, called by Sonar (launched by Jenkins).
I have a lot of AvoidVisibleFieldsRule violations. The main violations are found in generated files. As I can't do anything about those, I would like to exclude *.designer.cs files from the scan.
I can't find a way to do that. There is a property in Sonar to exclude generated files, but it doesn't seem to be applied for Gendarme.
Is there a way to do such a thing ?
Thanks for any help
Gendarme expects you to provide an ignore list:
http://www.mono-project.com/Gendarme.FAQ
https://github.com/mono/mono-tools/blob/master/gendarme/self-test.ignore
The ignore file format is a bit weird, but you can learn it by experimenting.
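For example, an entry that silences AvoidVisibleFieldsRule for one generated type looks roughly like this (the type name is illustrative):

# R: names the rule; the T:/M:/A: entries below it select the types/methods/assemblies to ignore
R: Gendarme.Rules.Design.AvoidVisibleFieldsRule
T: MyApp.MainForm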
Indeed, that is actually not normal at all. Generated code is excluded by the plugin with the standard configuration. What version of the C# plugins are you using?
Anyway, the configuration property you can try is "sonar.exclusions" (see http://docs.codehaus.org/display/SONAR/Advanced+parameters).
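For example, in sonar-project.properties (or passed as an analysis parameter):

# skip generated designer files during analysis
sonar.exclusions=**/*.designer.cs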
If you do not solve your problem right away, the best thing would be to drop a mail to the user mailing list (see http://www.sonarsource.org/support/support/) and send the verbose output of your build. To get this output simply add "-X" to the command line.
Hope it helps
I wonder what the Maven way is in my situation.
My application has a bunch of configuration files; let's call them profiles. Each profile configuration file is a *.properties file that contains keys/values and some comments on the semantics of those keys/values. The idea is to generate these *.properties files so that all of them have unified comments.
My plan is to create a template.properties file that contains something like
#Comments for key1/value1
key1=${key1.value}
#Comments for key2/value2
key2=${key2.value}
and a bunch of files like
#profile_data_1.properties
key1.value=profile_1_key_1_value
key2.value=profile_1_key_2_value
#profile_data_2.properties
key1.value=profile_2_key_1_value
key2.value=profile_2_key_2_value
Then I would bind to the generate-resources phase to create a copy of template.properties per profile_data_* file, filtering that copy with the corresponding profile_data_*.properties as a filter.
The easiest way is probably to create an Ant build file and use the antrun plugin. But that is not the Maven way, is it?
Another option is to create a Maven plugin for that tiny task. Somehow, I don't like that idea (plugin deployment is not something I want very much).
Maven does offer filtering of resources that you can combine with Maven profiles (see for example this post), but I'm not sure this will help here. If I understand your needs correctly, you need to loop over a set of input files and change the name of each output file. And while the first part might be possible using several <execution> blocks, I don't think the second part is doable with the resources plugin.
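To illustrate the first part, one <execution> per profile data file could look roughly like this (a sketch; the src/main/templates and src/main/filters locations are assumptions):

<execution>
  <id>profile-1</id>
  <phase>generate-resources</phase>
  <goals>
    <goal>copy-resources</goal>
  </goals>
  <configuration>
    <outputDirectory>${project.build.directory}/profiles/profile_1</outputDirectory>
    <resources>
      <resource>
        <directory>src/main/templates</directory>
        <includes>
          <include>template.properties</include>
        </includes>
        <!-- replace the ${...} tokens while copying -->
        <filtering>true</filtering>
      </resource>
    </resources>
    <!-- the values come from this profile's data file -->
    <filters>
      <filter>src/main/filters/profile_data_1.properties</filter>
    </filters>
  </configuration>
</execution>

Note that the copied file keeps the name template.properties, which is exactly the renaming limitation mentioned above.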
So if you want to do this in one build, the easiest way would indeed be to use the Maven AntRun plugin and implement the loop and the processing logic with Ant tasks.
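For example, something along these lines, using Ant's <copy> with a <filterset> (a sketch; plain Ant needs one <copy> per profile, or the ant-contrib <for> task for a genuine loop, and the paths are assumptions):

<target>
  <!-- copy the template once per profile, renaming it on the way -->
  <copy file="src/main/templates/template.properties"
        tofile="${project.build.directory}/profiles/profile_1.properties">
    <!-- read the ${...} token values from this profile's data file -->
    <filterset begintoken="${" endtoken="}">
      <filtersfile file="src/main/filters/profile_data_1.properties"/>
    </filterset>
  </copy>
  <!-- repeat for profile_data_2.properties, and so on -->
</target>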
And unless you need to reuse this in several places, I wouldn't encapsulate this logic in a Maven plugin; that wouldn't bring you much benefit if this is done in a single project, in a single location.
You can extend the way Maven does its filtering, as Maven retrieves its filtering strategy from the Plexus container via dependency injection. So you would have to register a new default strategy. This is heavy stuff and badly documented, but I think it can be done.
Use these URLs as starting point:
http://maven.apache.org/shared/maven-filtering/usage.html
and
http://maven.apache.org/plugins/maven-resources-plugin/
Sean