Code and DB Automation Migration Tool

Does anyone know of a code/DB migration tool? I'm looking for a scriptable way of automating code pushes from Dev to Production environments. The tool should be language/DB agnostic so it can work with multiple databases and development languages.

You can take a look at these tools (for deployment):
Capistrano + best practices
Maven
Heroku
If you are talking about binaries, this concept is strongly encouraged by Continuous Delivery, which considers promoting at the source-code level rather than at the binary level an antipattern. Use a CI tool to trigger production deployments, or you could easily write a simple interface to drive your deploy scripts. Alternatively, there are numerous tools available (open source as well as enterprise) that can drive your deployments; Jenkins is quite popular.
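If you go the route of a simple interface over your own deploy scripts, a minimal sketch in Java could look like the following; the deploy.sh name and the environment argument are placeholders for whatever your scripts actually expect:

    import java.io.IOException;

    public class DeployRunner {
        public static void main(String[] args) throws IOException, InterruptedException {
            // "deploy.sh" and the environment names are placeholders for your own scripts/targets.
            String environment = args.length > 0 ? args[0] : "staging";
            Process process = new ProcessBuilder("./deploy.sh", environment)
                    .inheritIO()   // stream the script's output to this console
                    .start();
            int exitCode = process.waitFor();
            if (exitCode != 0) {
                throw new IllegalStateException("Deployment failed with exit code " + exitCode);
            }
        }
    }

A CI job (Jenkins, for example) can then invoke this the same way a developer would from the command line, which keeps the promotion from Dev to Production scriptable.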

Related

Good way to distribute compiled files for testers

What would be a good way to distribute compiled code to testers for quality assurance? For example, DLLs, or .fas files for AutoLISP.
So far it has been included in source control, which is bad practice and hard to maintain. We would like to avoid giving our testers access to the code.
You can use a repository like Artifactory. Your build can publish the artifacts to Artifactory and you can give testers access to Artifactory but not to your codebase.
But if they are writing automated tests, this is an antipattern IMO. Tests should be stored alongside the code in the same repository.
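Coming back to the Artifactory route: it also accepts deployments over plain HTTP PUT, so even non-Java artifacts such as the DLLs or .fas files mentioned above can be published straight from the build. A rough sketch in Java, where the host, repository name, credentials and file paths are all placeholders for your setup:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Path;
    import java.util.Base64;

    public class PublishArtifact {
        public static void main(String[] args) throws Exception {
            // All values below (host, repository, credentials, paths) are hypothetical.
            String auth = Base64.getEncoder().encodeToString("build-user:secret".getBytes());
            HttpRequest request = HttpRequest.newBuilder(URI.create(
                    "https://artifactory.example.com/artifactory/qa-builds/tools/1.0/tools.dll"))
                    .header("Authorization", "Basic " + auth)
                    .PUT(HttpRequest.BodyPublishers.ofFile(Path.of("build/output/tools.dll")))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Upload returned HTTP " + response.statusCode());
        }
    }

Testers then only need read access to that repository in Artifactory, not to the codebase.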

Automatically refresh/compile in IntelliJ in a "nightly dev machine update"

We are using a small tool to automatically fetch updates for various projects from Git/SVN, recompile them and run tests locally with any local modifications that the developer might have developed, but not yet submitted to the global code repositories.
For some large projects, we see that the IntelliJ IDE only refreshes/recompiles code when the developer comes in and actually starts to work in the IDE, so the machines are busy recompiling for some time every morning, hindering the developers shortly after they come in.
I would like to do such a refresh/recompile during the nightly update already, so it does not waste developer time in the morning.
For Eclipse we are using https://github.com/moschinski/MondShell, a plugin which provides remote control functionality.
I tried to look for tools to automate things in IntelliJ, but could not find anything suitable.
Are there any plugins or other means of remotely controlling IntelliJ to force it to recompile code and update source repositories?
As I could not find anything that could do this, I started a small plugin that provides a small REST interface for controlling IntelliJ from scripts.
See https://github.com/centic9/IntelliJ-Automation-Plugin for the implementation details.
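Assuming the plugin (or one like it) exposes an HTTP endpoint for triggering a VCS update and rebuild, the nightly job only needs to hit the locally running IDE. The port and endpoint name below are hypothetical; check the project's README for the routes it actually provides:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class NightlyIdeRefresh {
        public static void main(String[] args) throws Exception {
            // Hypothetical port and endpoint; adjust to whatever the plugin actually exposes.
            HttpRequest request = HttpRequest
                    .newBuilder(URI.create("http://localhost:8080/refreshAndCompile"))
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("IDE responded with HTTP " + response.statusCode());
        }
    }

Such a call can be added as the last step of the existing nightly update tool.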

Organization and Structure of Web Application Testing Framework

So I'm looking to bring web application testing into our .Net environment with a framework such as Selenium. At first it'll probably be the developers writing the tests, but later it may be just the QA team. I'm wondering where the tests should actually live: should they live in the same solution as the web application, or in a completely separate solution that is just for the tests? Please note that these are regression tests that will be done by automating a web browser, so access to the web app's assemblies is not required. The answer probably depends on the environment and other factors, but I'm curious what other people have done in this situation.
Regression Testing covers both Unit and Functional Tests. Functional tests exercise the complete program with various inputs. Unit tests exercise individual functions, subroutines, or object methods.
Unit Tests are part of the solution's code and should live with the primary code, as in Microsoft MVC. Since Functional Tests examine the whole system and not just components, they can live anywhere. However, since your Functional Tests are automated scripts, they should be included inside the solution.
The advantage of having both Functional and Unit Tests live with the code is project management. Having all project-related files in one repository links the code version with the test version. Testing scripts need to be stored in a repository (version control system) just like any other project code, so it is good to keep them with the solution.
That way the test team can do white-box testing (testing with access to code) by checking out the solution just like a developer. Their work can be saved, shared, and documented inside Visual Studio. Microsoft even includes some web-based management tools with Team Foundation Server that can be used for managing the testing, with open communication between the test team and developers.
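For completeness, here is what one of those browser-driven functional regression tests might look like, sketched in Java with JUnit and Selenium WebDriver for brevity (the same shape applies to a C#/NUnit test project inside the solution); the URL and element id are placeholders:

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import static org.junit.Assert.assertTrue;

    public class LoginPageRegressionTest {
        private WebDriver driver;

        @Before
        public void setUp() {
            driver = new ChromeDriver();   // drives a real browser; no app assemblies required
        }

        @Test
        public void loginPageShowsWelcomeBanner() {
            driver.get("http://localhost:8080/login");               // placeholder URL
            assertTrue(driver.findElement(By.id("welcome-banner"))   // placeholder element id
                             .isDisplayed());
        }

        @After
        public void tearDown() {
            driver.quit();
        }
    }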

Hadoop development environment, what yours looks like?

I would like to know what your Hadoop development environment looks like.
Do you deploy jars to a test cluster, or run jars in local mode?
What IDE do you use and what plugins do you use?
How do you deploy completed projects to be run on servers?
What other recommendations do you have for setting up my own Hadoop development/test environment?
It's extremely common to see people writing Java MR jobs in an IDE like Eclipse or IntelliJ. Some even use plugins like Karmasphere's dev tools, which are handy. As for testing, the normal process is to unit test business logic as you normally would. You can unit test some of the MR surrounding infrastructure using the MRUnit classes (see Hadoop's contrib).
The next step is usually testing in the local job runner, but note there are a number of caveats here: the distributed cache doesn't work in local mode, and you're singly threaded (so static variables are accessible in ways they won't be in production). The next step (and most common test environment) is pseudo-distributed mode - all daemons running, but on a single box. This is going to run code in different JVMs with multiple tasks in parallel and will reveal most developer errors.
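As a rough idea of what an MRUnit test looks like, here is a minimal sketch assuming a hypothetical WordCountMapper with LongWritable/Text input and Text/IntWritable output:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mrunit.mapreduce.MapDriver;
    import org.junit.Test;

    public class WordCountMapperTest {
        @Test
        public void emitsOneCountPerToken() throws Exception {
            // WordCountMapper is a placeholder for your own Mapper implementation.
            MapDriver<LongWritable, Text, Text, IntWritable> driver =
                    MapDriver.newMapDriver(new WordCountMapper());
            driver.withInput(new LongWritable(0), new Text("hadoop hadoop"))
                  .withOutput(new Text("hadoop"), new IntWritable(1))
                  .withOutput(new Text("hadoop"), new IntWritable(1))
                  .runTest();
        }
    }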
MR job jars are distributed to the client machine in different ways. Usually custom deployment processes are seen here. Some folks use tools like Capistrano or config management tools like Chef or Puppet to automate this.
My personal development is usually done in Eclipse with Maven. I build jars using Maven's Assembly plugin (packages all dependencies in a single jar for easier deployment, but fatter jars). I regularly test using MRUnit and then pseudo-distributed mode. The local job runner isn't very useful in my experience. Deployment is almost always via a configuration management system. Testing can be automated with a CI server like Hudson.
Hope this helps.

Is this the right approach for structuring codebase?

We have a Java codebase that is currently one web-based NetBeans project. As our organization and codebase grow, it seems obvious that we should partition the various independent pieces of our system into individual jars: one jar library for the data access layer, one for a general lib, one for specialized knowledge access, etc. Then we'd have a separate project for the web application, and could eventually have one for a command-line tools app, another web app, etc.
What is the recommended practice for doing and managing this? Is it Maven? Can it all be done effectively with just NetBeans alone, by simply creating individual projects and setting the dependencies of one project on the jar files of the others?
I'd agree with SteveG above on using Maven2 to help you modularise your code base, but I'd use Nexus as the local repository for Maven instead of Archiva. The guys at Sonatype also have an excellent (free html/pdf) book on how to use Maven, Nexus, and integrate it into IDEs.
Be careful on how you decide to partition up your projects, though. There's no sense in over-complicating your dependencies just for the sake of it.
I would definitely say check out Maven(2). It is very good for doing this sort of thing. You can define individual modules and version them very easily. NetBeans also does a decent job of integrating with it.
Also, I suggest you set up Archiva, which will let you depend on binaries of other artifacts that your company generates internally. It also acts as a proxy and will keep a local copy of any external dependencies your projects might have, so it's very quick to get new versions internally.
I would create Ant scripts to build the pieces and for deployment. Then you are not dependent on your IDE for build/deployment.
It sounds like your code is getting to the point where you're graduating from the WAR approach and have entered into the EAR level.
An EAR is just another archive that contains all the other JARs and WARs that get combined to create an application. There are four types of modules that can reside inside it: Web, EJB, Connectors, and Utilities. Most people only use Web and Utilities, so they go with the WEB-INF/lib approach.
But if you're starting to get a lot of interdependencies, what you do is create an EAR project and make your web project a child of it. Each Utility JAR, which is just straight Java code used by other modules, also becomes a child of the EAR. Finally, in each of your projects there should be a META-INF/MANIFEST.MF file that lists the JARs that the JAR/WAR depends on.
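As a concrete illustration, the WAR's manifest could reference utility JARs sitting at the EAR root via the standard Class-Path attribute; the JAR names below are placeholders:

    Manifest-Version: 1.0
    Class-Path: general-lib.jar data-access.jar knowledge-access.jar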
I'm an Eclipse guy and most of this gets taken care of for you in Eclipse, but I'm sure NetBeans has very similar functionality.
Now the only problem is that you have to use a full Java EE server to deploy an EAR, so I don't think you can use Tomcat, if that's what you're currently using.