Management tools for configuration properties in the same style as tools like Liquibase and Flyway? - configuration-management

I'm wondering if there are any existing tools that allow for management of configuration properties (like Java properties files, or other similar formats) in the same style as database migration tools like Liquibase and Flyway.
I'd like a better way of managing my application's configuration properties which would allow me to track their evolution over time.
Does something like this exist?

Isn't it enough if all config is externalized and version controlled? Tracking evolution then is just a matter of version history. Unlike DB migration scripts, config doesn't execute to upgrade something from version x to version y. The Continuous Delivery book says:
Generally, we consider it bad practice to inject configuration information at build or packaging time. This follows from the principle that you should be able to deploy the same binaries to every environment so you can ensure that the thing that you release is the same thing that you tested.
And then there is this advice to store it all as environment variables.
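If you follow that advice, the application-side lookup stays simple enough that no extra tooling is needed. A minimal sketch in Java (the file name and the key-to-variable naming convention are my own assumptions, not from the book):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class AppConfig {

        private static final Properties DEFAULTS = new Properties();

        static {
            // Packaged defaults; environment-specific values override them below.
            try (FileInputStream in = new FileInputStream("application.properties")) {
                DEFAULTS.load(in);
            } catch (IOException e) {
                // No defaults file present; rely entirely on the environment.
            }
        }

        /** Looks up "db.url" as DB_URL in the environment first, then in the file. */
        public static String get(String key) {
            String env = System.getenv(key.toUpperCase().replace('.', '_'));
            return env != null ? env : DEFAULTS.getProperty(key);
        }
    }

The same binary can then be promoted through environments with only the variables changing, e.g. DB_URL=jdbc:... java -jar app.jar.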

Related

Is there any way to add some controls before commit in IntelliJ?

We are changing our technology from C# to Java. We are using IntelliJ Community Edition as our IDE. While using Visual Studio, we had custom check-in policies to keep developers from doing things that violate our standards. We also want this for IntelliJ before commits, to protect our project structure according to our standards. Is there any way to realize this?
That is generally possible, but the implementation depends on the VCS in use.
E.g. in Git, one can use local hooks (to verify on commit, e.g. pre-commit) and server-side hooks (to prevent pushing incorrect changes, e.g. pre-receive).
If TFS is used, the existing TFS integration plugins allow implementing custom check-in policies as additional custom plugins; some are already implemented.
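For illustration only, a minimal local pre-commit hook might look like this (the enforced rule is a made-up example; save it as .git/hooks/pre-commit and make it executable):

    #!/bin/sh
    # Example policy: all staged Java sources must live under src/.
    violations=$(git diff --cached --name-only --diff-filter=ACM \
        | grep '\.java$' | grep -v '^src/')
    if [ -n "$violations" ]; then
        echo "Commit blocked; Java sources must live under src/:" >&2
        echo "$violations" >&2
        exit 1
    fi
    exit 0

Note that local hooks are not versioned with the project, so each clone needs them installed; a server-side pre-receive hook is the place to enforce rules that must not be bypassed.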

Database repository (Oracle) vs File system as a repository for Pentaho

I want to use Pentaho for my work. After a bit of research I found that to store the ktr/kjb files I can use either a database or the file system as a repository. However, I don't see any benefit of using a database repository over the file system. The basic purpose of the repository here is to create a common location where I can keep all the developed ktr/kjb files in the production environment. With a database repository, it holds all the developed ktr/kjb files in production, and every time I need to run a job/transformation I connect to the database to get the respective ktr/kjb file (similar to how Informatica stores transformations). A file-based repository, on the other hand, is just a folder holding all the developed files.
Can somebody explain the pros and cons of both types of repository?
Please let me know if you need any other information.
Thanks in advance.
When several people develop the same jobs/transformations, the database repository will hold the changes and ensure everyone works with the latest versions.
The pros of a filesystem are, of course, ease of backup, no database connection to trouble you, and the possibility of using other, more modern and mature version control systems for the files than the database repositories use.
If you are using the free community edition, I would definitely go with the file repository, along with external file-based version control and migration systems. If you are using the enterprise edition, then you might want to consider the database repository, since you can then use Pentaho's built-in version control and migration systems.
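For what it's worth, the choice also shows up in how you launch jobs; a sketch with Kitchen (the repository name, credentials, and paths are illustrative):

    # File repository: point Kitchen at the .kjb file directly
    ./kitchen.sh -file=/repo/jobs/daily_load.kjb -level=Basic

    # Database repository: connect by the name defined in repositories.xml
    ./kitchen.sh -rep=prod_repo -user=admin -pass=admin \
        -dir=/jobs -job=daily_load -level=Basic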

How to install TinkerPop

I have just recently come across graph databases and TinkerPop.
I am somewhat confused about what to install to use TinkerPop 2.5.0/2.6.0. Does it have to be installed on each database separately (as you would a plugin), or can I set it up once and then use it to access the different supported databases?
My goal is to use it to try out 2 (possibly more) different databases (mainly Neo4j and OrientDB or perhaps Titan) and be able to query them using Gremlin.
How you use TinkerPop is entirely dependent on what you intend to do with it. If you are just getting started, I suggest you simply download the Gremlin distribution, unpack it and start the console with bin/gremlin.sh. Working in the REPL will help you learn quickly, as the feedback time for trying things out is basically instantaneous. Even as your Gremlin code makes its way to production, you will find the Gremlin Console to be a good friend, as it provides a way to try out ideas before committing them to code. It also provides a mechanism for maintaining/administering your database with Gremlin.
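To give a feel for it, the 2.x console ships with a small sample graph you can query immediately (session abbreviated):

    gremlin> g = TinkerGraphFactory.createTinkerGraph()
    ==>tinkergraph[vertices:6 edges:6]
    gremlin> g.v(1).out('knows').name
    ==>vadas
    ==>josh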
If you intend to use TinkerPop in a JVM-based application, then you will want to use a dependency management tool like Maven and reference the appropriate TinkerPop dependencies you'd like to use. Alternatively, I suppose you could manage the dependencies manually by downloading them individually from Maven Central and adding them to your path (though I wouldn't recommend that for obvious reasons). My point in mentioning it is just to make clear that the TinkerPop library is a set of jars that can be included in your JVM development tools like any other.
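For example, pulling the 2.x Gremlin implementation into a Maven project is a single dependency (version 2.6.0 shown, to match the question):

    <dependency>
      <groupId>com.tinkerpop.gremlin</groupId>
      <artifactId>gremlin-java</artifactId>
      <version>2.6.0</version>
    </dependency>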
How you work with a particular database depends on the one that you choose, but again the process is little different from what I described above. Neo4j is packaged with the Gremlin Console, so you can work with it right away in there. For OrientDB, you will want to copy its dependencies into the Gremlin Console path (i.e. the /lib directory). If you are building an application, then Maven is again your friend: you simply reference the Neo4j or OrientDB Maven coordinates and all required dependencies come with them.
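Because the Blueprints API is uniform across the 2.x implementations, your code barely changes between databases; switching is mainly a constructor change. A rough sketch using the in-memory TinkerGraph (the Neo4j path in the comment is illustrative):

    import com.tinkerpop.blueprints.Graph;
    import com.tinkerpop.blueprints.Vertex;
    import com.tinkerpop.blueprints.impls.tg.TinkerGraph;
    import com.tinkerpop.gremlin.java.GremlinPipeline;

    public class GremlinDemo {
        public static void main(String[] args) {
            // Swap in e.g. new Neo4jGraph("/path/to/db") without touching the rest.
            Graph graph = new TinkerGraph();

            Vertex alice = graph.addVertex(null);
            alice.setProperty("name", "alice");
            Vertex bob = graph.addVertex(null);
            bob.setProperty("name", "bob");
            graph.addEdge(null, alice, bob, "knows");

            // Gremlin traversal: the names of everyone alice knows.
            for (Object name : new GremlinPipeline<Vertex, Vertex>(alice)
                    .out("knows").property("name")) {
                System.out.println(name); // prints "bob"
            }

            graph.shutdown();
        }
    }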
Some implementations, like Titan, have separate prerequisites (e.g. installing Cassandra or HBase). In those cases, you will need to refer to their documentation for specifics on how to set them up.
All that said, if you are just getting started, I recommend that you look into TinkerPop3. It is the next major line of development for TinkerPop and quite different from its previous incarnations. It does not yet have all of the implementations in play, but database vendors are at work to bring them online. All that I wrote about TinkerPop 2.x "installation" above generally applies to TinkerPop3; however, the TinkerPop3 Gremlin Console has a plugin system that makes it a little easier to bring in external dependencies, so you don't have to deal with them manually.

How to upload new/changed files from the development server to the production one?

Recently I started to incorporate good practices into my development workflow, so I split the development server and the production one. I also incorporated a versioning system using Subversion (TortoiseSVN).
Now I have the problem of synchronizing the production server (Apache shared hosting) with the files of the latest development version on my local machine.
Before, I didn't have this problem because I worked directly with the server files through FileZilla. But now I don't know how to transfer the files in an efficient way, and what the good practices are in this respect.
I read something about Ant and Phing, but I'm not sure whether this is appropriate for me or unnecessary complexity.
Rsync is a cross-platform tool designed to help in situations like this; I've used it for similar purposes on multiple occasions. This DevShed tutorial may be of some help.
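For example, a typical invocation to push only the changed files to a shared host over SSH might look like this (the paths and host are illustrative):

    rsync -avz --delete --exclude='.svn' ./public_html/ user@example.com:/home/user/public_html/

Since --delete removes remote files that no longer exist locally, it is worth rehearsing the transfer with --dry-run first.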
I don't think you want to "automate" it so much as establish control over your deployment and integration process. I generally like SVN, but it has some bugs, and one problem I have with it is that it doesn't support baselining: instead you need to make a physical branch of your repository if you want a stable version to promote to higher environments while continuing to advance the trunk.
Anyway, you should look at continuous integration and Jenkins. This is a rather broad topic to which no single specific answer can be given. There are many ins and outs. It depends on your application platform and components: do you have database changes, are you dealing with external web services or 3rd-party APIs, etc.?
There may be more structured solutions out there, but with TortoiseSVN you can export only the files changed between revisions, in a folder tree structure, and then upload them as usual with FileZilla.
Take a look at:
http://verysimple.com/2007/09/06/using-tortoisesvn-to-export-only-newmodified-files/
1. Using TortoiseSVN, right-click on your working folder and select “Show Log” from the TortoiseSVN menu.
2. Click the revision that was last published.
3. Ctrl+Click the HEAD revision (or whatever revision you want to release) so that both the old and the new revisions are highlighted.
4. Right-click on either of the highlighted revisions and select “Compare revisions.” This will open a dialog window that lists all new/modified files.
5. Select all files from this list (Ctrl+A), then right-click on the highlighted files and select “Export selection to…”
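If you prefer the command line, the same changed-file list can be produced with svn itself (the revision number and URL are illustrative):

    svn diff --summarize -r 1234:HEAD http://svn.example.com/repo/trunk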
Side note:
You will have to share more details about your workflow and configuration - the applicable solutions depend on them. I see 4 main nodes in the game: Workplace, Repo Server, DEV, PROD; some nodes may be united (1+2, 2+3) and may have different sets of tools (do you have SSH, rsync, NFS, Subversion clients on DEV|PROD?). All the details matter.
In any case, Subversion repositories have a feature called hooks; in your case a post-commit hook (executed on the repository server side after each commit) may be used.
In this hook (any code which can be executed in unattended mode) you can define and implement any rules for performing a deploy to any target under any conditions. You only have to know:
Which transport will be used for transferring files
What your webspaces on the servers are (working copies or just clean unversioned files - both solutions have their pros and cons) - this defines which deployment policy ("export" or "update") you have to implement in the hook
There are also scripts around that export the files affected by a revision (or range of revisions) into an unversioned tree.
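As an illustration of the "update" policy, a post-commit hook could be as small as this (the paths, host, and rsync transport are my assumptions):

    #!/bin/sh
    # hooks/post-commit on the repository server; Subversion passes these in.
    REPOS="$1"
    REV="$2"
    # Bring the server-side working copy up to the committed revision...
    svn update --quiet /srv/site-wc
    # ...then mirror it to the production webspace, skipping .svn metadata.
    rsync -az --delete --exclude='.svn' /srv/site-wc/ user@prod:/var/www/site/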

MSBuild, support for out-of-source builds

Is it possible to use MSBuild to make out-of-source builds: a build outside the source directory?
This is a standard feature in some other build systems, like the Autotools or CMake. It is useful when you want to experiment with build options or share one source tree (which can be huge).
For those who ask why such a thing is needed: with this, I could do a checkout (4GB here), make one build, revert to some specific revision and do another build without throwing away the first one. Or I can make a throw-away configuration with some custom settings without having to go through all the configuration settings in VS. Or share a checkout between multiple automatic builders.
I know I can define separate configurations with different paths, but this is cumbersome (especially when working with multiple projects), and these configurations will propagate to other developers through common VCS operations, which I would like to avoid when experimenting.
One possible solution could be to move your configuration stuff into a separate .targets file. If you would like to experiment, you can replace that .targets file with one of your choice, while other developers keep using the default configuration.
I'm still not sure if and why your sources are an issue, since you would export them to a configurable build path. Is it because it would take too long to export your sources to an experimental build location? Could you use pre-built shared components for your experimental build?
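As a rough sketch of that idea (the file name and paths are illustrative), such a swappable file only needs to redirect the output properties:

    <!-- experimental.targets: pulled into the project with
         <Import Project="experimental.targets" /> -->
    <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <PropertyGroup>
        <OutputPath>C:\builds\experiment1\bin\</OutputPath>
        <IntermediateOutputPath>C:\builds\experiment1\obj\</IntermediateOutputPath>
      </PropertyGroup>
    </Project>

The same properties can also be overridden per invocation without touching any versioned file, e.g. msbuild MyApp.csproj /p:OutputPath=C:\builds\experiment1\bin\ (for a whole solution, /p:OutDir=... serves a similar purpose).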
I suspect that you are experiencing limitations because you use MSBuild just as a CLI for your Visual Studio solution(s). Admittedly, MSBuild requires considerable effort to cater for flexible and complex requirements. Maybe a continuous integration system like CruiseControl (just to name one) is what you are looking for, because it offers the ease of use and flexibility you are accustomed to from the Autotools and CMake. If "free" isn't one of your requirements, Team Foundation Server might be an option to drive MSBuild for you.
4GB of sources is huge, so any given tool will have to work around moving that much data in order to remain fast.