How do you distribute the IDE and its configuration within your team?

I'm wondering how software development teams distribute their standard IDE(s).
E.g. developing with Eclipse, a custom code formatter, an SVN repository, copyright headers...
At the moment my team has a standard ZIP file which is distributed among the developers.
Problem:
If one file, a plugin, or the IDE itself changes (e.g. new coding guidelines, or an upgrade to Eclipse 3.5.1), the whole distribution has to be done again, and every developer needs to unzip the bundle again. Imagine working with different workspaces (Jetty, different Tomcat versions, WTP) due to project history. That doesn't scale.
I know that there are some related articles:
A new version of Eclipse just came out. Is there anything I can do to avoid having to manually hunt down my plugins again?
Manage Your Eclipse Install With A Local Git Repository
And some commercial programs.
Eclipse also has a new update-installer approach (p2).
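For reference, that new mechanism (p2) can already drive headless installs from an update site. A minimal sketch, with a placeholder repository URL and feature ID:

    # Install a feature into an existing Eclipse from a shared update site
    eclipse -nosplash -application org.eclipse.equinox.p2.director \
      -repository http://your.server/updatesite \
      -installIU com.example.teamide.feature.feature.group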
But I don't see the killer app. How does your team solve this? Is there a best practice?
I guess the best solution would be a program letting you choose your current project, which then downloads the configured IDE from a server and lets you know when project config files are updated.

For Eclipse, look at Buckminster; it targets exactly your use case, I suppose. I didn't use it personally, though.

At my previous company they wrote a custom update agent that pulled from a centrally configured server which was updated by the team leaders. It worked well, until people wanted to install their own plugins.
Basically, a developer wanted a plugin, fought in futility to get it included in the default (managed) repo, installed it himself, and then updates broke on his machine when the team lead had a sudden stroke of common sense and included it.
They never did come up with a 'good' way to manage it. But, at least they didn't put us all on terminal servers with thin clients.

Related

Automate setup of IBM RAD and WebSphere

In a project we are forced to use IBM RAD and WebSphere Application Server (6.1).
Setting up the development environment is currently described in about 10 pages of wiki documentation and takes about a day if you don't make any mistakes. The main parts are:
install the IBM installer
use it to install RAD
install a patch to the installer
use it to install half a dozen patches to RAD
create a network drive pointing to ...
check out the project source to ...
install WAS
configure a WAS instance with two JDBC drivers, six datasources, a queue ...
I think you get the idea.
I'd like to automate that process (or at least 95% of it) to something like:
start script X
when prompted, enter a directory with at least y GB of free space
get yourself a cup of coffee
start working
What are the proper tools to get this working? Should I use something like Puppet or Chef? Or is that overkill, and can I just zip the installation directory and change two registry entries?
Does anybody have experience with this? Any pointers to get started?
You can script the configuration of WAS using wsadmin:
http://pic.dhe.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=%2Fcom.ibm.websphere.base.doc%2Finfo%2Faes%2Fae%2Fwelc6topscripting.html
It takes some effort to learn, but in the end it saves a lot of time. You need to use Jython or Jacl to do so.
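For example, a minimal Jython sketch, run via wsadmin -lang jython -f createDs.py (the cell name, provider name, and driver paths are placeholders, not from the question):

    # Create a JDBC provider and a datasource under it, then persist the change
    cell = AdminConfig.getid('/Cell:devCell01/')
    provider = AdminConfig.create('JDBCProvider', cell,
        [['name', 'DevJDBCProvider'],
         ['implementationClassName', 'com.ibm.db2.jcc.DB2ConnectionPoolDataSource'],
         ['classpath', '/opt/drivers/db2jcc.jar']])
    AdminConfig.create('DataSource', provider,
        [['name', 'DevDataSource'], ['jndiName', 'jdbc/devdb']])
    AdminConfig.save()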
WAS profiles can be created headless with a response file; use manageprofiles.bat in the bin directory of WAS to do so.
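A sketch of such a response file (all values illustrative), passed as manageprofiles.bat -response appsrv.response:

    create
    profileName=AppSrv01
    profilePath=C:\IBM\WebSphere\AppServer\profiles\AppSrv01
    templatePath=C:\IBM\WebSphere\AppServer\profileTemplates\default
    nodeName=devNode01
    cellName=devCell01
    hostName=localhost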
Regarding the RAD installation: install the version of IBM Installation Manager that you need for the patches right away, and then install everything in one shot. Add the fixes you need as repositories right from the beginning; the fixes will then be installed instead of the old versions. You should have the base images and all fixes on the local disk for this.
The installation of RAD itself can also run in headless mode, but I don't have any experience with doing this.
The configuration of the RAD workspace is the next thing you want to automate, and this is not so simple to do. The simplest approach is to export the workspace preferences of a workspace that contains all settings to an Eclipse preference file (.epf) via File -> Export.
This is not a complete solution but may help you a bit. Be sure to keep all settings in just one file and import that into a fresh workspace.
Use the Notepad++ TextFX plugin to sort the settings in the .epf file. You can then figure out which settings you need just by looking at them.
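For orientation, an .epf file is just key=value lines, so it is easy to diff and trim. Entries look roughly like this (the keys shown are real JDT preference keys; the values are examples):

    file_export_version=3.0
    /instance/org.eclipse.jdt.core/org.eclipse.jdt.core.compiler.compliance=1.6
    /instance/org.eclipse.jdt.core/org.eclipse.jdt.core.formatter.tabulation.char=space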
More control over the workspace settings and automated configuration requires accessing Eclipse internal APIs and some coding.
Regarding the project sources, it depends on the SCM you are using.

Eclipse 3.7 RCP Application with multiple plugins

What is the right way to make an RCP application that is “ready for plugins”? I have struggled with this basic concept and am trying to accomplish it in Eclipse 3.7 (the latest 3.x version).
Step 1
I would like to explore this by using 3 eclipse plugin projects:
• HelloWorldRCP
• HelloWorldPluginA
• HelloWorldPluginB
Would it make sense to build HelloWorldRCP with all the common things, such as a menu bar with an Edit menu including cut, copy, and paste menu items? HelloWorldPluginA could then add an additional menu item called “Alpha”, and HelloWorldPluginB could add yet another called “Beta”, while the cut, copy, and paste functionality still works within plugins A and B.
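Something like this contribution in HelloWorldPluginA's plugin.xml is what I have in mind (IDs are placeholders; org.eclipse.ui.menus is the standard extension point, the command itself would be declared via org.eclipse.ui.commands, and "menu:edit" assumes the Edit menu was given the id "edit"):

    <extension point="org.eclipse.ui.menus">
       <!-- Contribute an "Alpha" item to the shared Edit menu -->
       <menuContribution locationURI="menu:edit?after=additions">
          <command commandId="helloworld.pluginA.commands.alpha"
                   label="Alpha"/>
       </menuContribution>
    </extension>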
Step 2
Next, how do I deploy this as a “product”? I have made a new product configuration and defined the dependencies from the default runtime configuration that was generated. I notice that a lot of dependency JARs are included which I don't think I use. For example, I don't use data binding to my knowledge, but it keeps coming up as a required dependency.
I go to Export | Eclipse Product and an executable environment is created in my desired folder. However, when I copy this to another machine it seems to keep referencing the original machine's Java installation location. How does one get around this?
I have tried to bundle a JRE with the product export, but nothing is created. I have also tried just copying my jre6 as a jre folder; this does seem to work.
The next problem is the 32/64-bit Java execution environments. What is advised here? I have been aiming to build on 32-bit only, hoping that it will then run on both 32- and 64-bit platforms. Is this correct?
Step 3
I need to web-start this now. The old way of launching an Eclipse 3.5 application, using a startup.jar, has changed; I now use the Equinox launcher and reference it in the JNLP instead of the startup.jar. However, I keep getting an exception which seems related to the 32/64-bit Equinox win32_64 JAR. I notice that the export writes a folder and not a JAR. I read somewhere that this is a “clever trick” to allow compatibility with both 32- and 64-bit runtime environments.
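For reference, the JNLP references the Equinox launcher roughly like this (a sketch; the codebase, JAR version, and arguments are placeholders):

    <jnlp spec="1.0+" codebase="http://example.com/rcp" href="helloworld.jnlp">
      <information>
        <title>HelloWorldRCP</title>
        <vendor>Example</vendor>
      </information>
      <security><all-permissions/></security>
      <resources>
        <j2se version="1.6+"/>
        <jar href="plugins/org.eclipse.equinox.launcher_1.2.0.jar"/>
      </resources>
      <application-desc main-class="org.eclipse.equinox.launcher.WebStartMain">
        <argument>-nosplash</argument>
      </application-desc>
    </jnlp>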
The problem here is that I need a jar and not a folder so that I can sign the jars required and deploy accordingly.
Does anyone have a Java Web Start example for an Eclipse 3.7 RCP application? Or any advice?
You are going to need a lot of time to learn everything you've asked about here.
Here is one of the best places to start... http://www.vogella.com/eclipse.html
That site covers a lot of basics. But you need a little more than basics.
The best example of a working RCP product with some of the features you require can be found at ... http://max-server.myftp.org/trac/mp3m
This guy (Kai) makes all of the source code available via SVN, and he has some very advanced stuff going on in his application. He also has a good blog with some advanced RCP tips and tricks. http://www.toedter.com/blog/
Another thing you'll want to investigate is Tycho. I realize that you didn't mention anything about building your application, but I've found that using Tycho for building has made my most recent foray into Eclipse RCP 100 times better than the other times I've done RCP work. So, my advice, get to know Tycho. http://wiki.eclipse.org/Tycho/Reference_Card
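To give a flavor of Tycho, here is a minimal parent POM sketch (the group ID and Tycho version are illustrative; the plugin modules themselves would use packaging eclipse-plugin):

    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example.helloworld</groupId>
      <artifactId>parent</artifactId>
      <version>1.0.0-SNAPSHOT</version>
      <packaging>pom</packaging>
      <build>
        <plugins>
          <plugin>
            <!-- Enables Tycho's manifest-first build for Eclipse artifacts -->
            <groupId>org.eclipse.tycho</groupId>
            <artifactId>tycho-maven-plugin</artifactId>
            <version>0.16.0</version>
            <extensions>true</extensions>
          </plugin>
        </plugins>
      </build>
    </project>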
The learning curve of Eclipse RCP is somewhat steep, but I think it's worth the effort.
Good Luck!

Archivable, replicable releases when building with Maven: is there a right way?

We have a largish standalone (i.e. not Java EE) commercial Java project (10,000+ classes, four or five SVN repositories, ten or twenty third-party libraries) that's in the process of switching over to Maven. Unfortunately only one engineer (in a team of a dozen or so distributed across three countries) has any prior Maven experience, so we're kind of figuring it out as we go.
In the old Ant way of doing things, we'd:
check out source code from three or four repositories
compile it all into a single monolithic JAR
release that (as part of a ZIP file with library JARs, an installer, various config files, etc.)
check the JAR into SVN so we had a record of what the customers had actually got.
Now, we've got a Maven repository full of artifacts, and a build process that depends on Maven having access to that repository. So if we need to replicate what we actually shipped to a customer, we need to do a build against a Maven repository that has all the proper versions of everything. This is doable, I guess, if in (some version of) the (SVN-controlled) POM files we set all the dependencies to released versions?
But it gives our release engineer the creepy-crawlies, because there doesn't seem to be any way:
to make sure that somebody doesn't clobber the copy of foo-api-1.2.3.jar on the WebDAV server by mistake (the WebDAV server has access control, but that wouldn't stop a buggy build script)
to detect it if they did
to recover afterwards
His idea is, for release builds, to use a local file system as the repository rather than the WebDAV server, and put that local repository under SVN control.
Our one Maven-experienced engineer doesn't like that -- I guess because he doesn't like putting binaries under version control? -- and suggests that maybe the professional version of the Nexus server can solve the clobbering or clobber-tracking/recovery problem.
Personally, I'm not happy (sorry, Sonatype readers) with shelling out money for a non-free build system when we haven't even seen any benefit from the free version yet, and there's no guarantee it will actually solve the problem.
So our choices seem to be:
WebDAV server
Pros: only one server, also accessible by devs, ...?
Cons: easy clobbering, no clobber-tracking/recovery
Local file system
Pros: can be placed under revision control
Cons: only works with the distribution script
Frankly, both of these seem like hacks to me, and I have to wonder if there isn't a better way to do this.
So: Is there a right thing to do here?
I'm not sure I got everything, but I would:
Use the maven-release-plugin (which automates the release process, i.e. executes all the steps documented in release:prepare).
Use WebDAV with an anonymous read-only and authenticated write policy (so only the release engineer can actually deploy released artifacts to the corporate repo).
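The release flow with that plugin then boils down to two standard goals:

    mvn release:prepare   # verifies the build, tags the release in SCM, bumps POM versions
    mvn release:perform   # checks out the tag, builds it, and deploys the artifacts to the repo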
There is no need to put generated artifacts under version control (if you have the POMs under version control). I don't see the benefit of using the local file system instead of WebDAV (it doesn't provide more security; you can secure WebDAV as well), and I don't see what the commercial version of Nexus would solve here.
Nexus has a setting which prevents you from clobbering an already-released artifact in a release repository.
For a team of about a dozen, the free version of Nexus should be enough.

How to migrate WebSphere app with no WAR/EAR file

I am to migrate a WebSphere machine (including the applications which run on it) to a new machine. They wanted a clean install of the OS and WebSphere, so I did that. I also took a full file backup of all the applications they had on the old server. The problem is that to re-install them on the new server, the WebSphere dialog asks me for the JAR/EAR/WAR file, which I don't have.
Is there any reasonably easy way to simply extract the backup of the WebSphere application files I took from the old machine, and configure the new machine to use them? WAR etc. is a nice feature to have, but being forced to use it seems silly.
Edit: The existing WebSphere server is still up and running in production.
Edit: The old server is WAS 3.5, which means it doesn't even have an export function, sadly. Also, the directory it actually runs the content from has a completely different structure (consisting of something like %/Web and %/Servlet, where % is the context path of the application). In the "Install" section, it doesn't even mention EAR or WAR, only JAR. I am currently thinking that the best thing to do might be to just copy the directory over to another WAS 3.5 system and then upgrade that system (and hope it converts the folder structure and updates the config as part of the upgrade).
Edit: The closest thing I have found to a solution so far is this link:
http://www.javazoom.net/services/newsletter/was4.html (though I am not sure if that tool is available or relevant for WAS 7.x).
This has to be a problem other people have run into before, but I can't find a solution anywhere on the web.
Thank you!
Here they have sample Jacl scripts one can use to export/import an app server's configuration, so that is what you can start with. If your new box uses the same version of WAS (and the same topology, if it is not a standalone box) as the old one, it might be a (relatively) safe process.
Migration between different versions of WebSphere might be somewhat more tricky, but I'm sure IBM has published at least one Redbook on that topic.
If you still have the old server running, then just export the apps and you have the WAR/EAR files. However, if you don't know the configuration for the apps, you are screwed. I am sure IBM has tools that you can use; some of the paid tools even look nice and user-friendly (at least according to their sales demos). I can't tell you what you need, since I don't know what documentation you have for your apps, but it looks like there is not much there; otherwise you would just install the applications the same way they were installed on your old server and use the binaries (WAR, EAR, JAR) that are archived somewhere.
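For what it's worth, on a server new enough to have wsadmin (WAS 5 and later; the asker's WAS 3.5 predates it), exporting every installed application is a short Jython sketch (the target path is illustrative):

    # Export each installed application as an EAR file
    for app in AdminApp.list().splitlines():
        AdminApp.export(app, '/tmp/export/' + app + '.ear')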

How to automate development environment setup? [closed]

Every time a new developer joins the team, or the computer a developer uses changes, the developer needs to do lots of work to set up the local development environment to make the current project work. As a Scrum team we are trying to automate everything, including deployment and tests, so what I am asking is: is there a tool or practice to automate local development environment setup?
For example, to set up my environment I first had to install Eclipse, then SVN, Apache, Tomcat, MySQL, and PHP. After that I populated the DB and had to make minor changes in the various configuration files, etc. Is there a way to reduce this labor to one click?
There are several options, and sometimes a combination of these is useful:
automated installation
disk imaging
virtualization
source code control
Details on the various options:
Automated Installation: tools for automating installation and configuration of a workstation's various services, tools, and config files:
Puppet has a learning curve but is powerful. You define classes of machines (development box, web server, etc.) and it then does what is necessary to install, configure, and keep the box in the proper state. You asked for one click, but Puppet by default is zero-click, as it checks your machine periodically to make sure it is still configured as desired. It will detect when a file or mode has been changed, and fix the problem. I currently use this to maintain a handful of RedHat Linux boxes, though it's capable of handling thousands. (Does not support Windows as of 2009-05-08.) A minimal manifest sketch follows this list.
Cfengine is another one. I've seen this used successfully at a shop with 70 engineers using RedHat Linux. Its limitations were part of the reason for Puppet.
SmartFrog is another tool for configuring hosts. It does support Windows.
Shell scripts. RightScale has examples of how to configure an Amazon EC2 image using shell scripts.
Install packages. On a Unix box it's possible to do this entirely with packages, and on Windows msi may be an option. For example, RubyWorks provides you with a full Ruby on Rails stack, all by installing one package that in turn installs other packages via dependencies.
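To give a flavor of the first option, here is a minimal Puppet manifest sketch for a developer box (the class, package, and file names are illustrative and distro-specific):

    # Declare the packages and a config file every dev box must have;
    # Puppet installs what is missing and repairs what drifts.
    class devbox {
      package { ['subversion', 'apache2', 'mysql-server', 'php5']:
        ensure => installed,
      }
      file { '/etc/profile.d/dev_env.sh':
        ensure  => file,
        content => "export DEV_HOME=/opt/dev\n",
        mode    => '0644',
      }
    }
    include devbox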
Disk Images: Then of course there are also disk imaging tools for storing an image of a configured host such that it can be restored to another host. As with virtualization, this is especially nice for test boxes, since it's easy to restore things to a clean slate. Keeping things continuously up-to-date is still an issue: is it worth making new images just to propagate a configuration file change?
Virtualization is another option, for example making copies of a Xen, VirtualPC, or VMWare image to create new hosts. This is especially useful with test boxes, as no matter what mess a test creates, you can easily restore to a clean, known state. As with disk imaging tools, keeping hosts up-to-date requires more manual steps and vigilance than if an automated install/config tool is used.
Source Code Control: Once you've got the necessary tools installed/configured, doing builds should be a matter of checking out what's needed from a source code repository and building it.
Currently I use a combination of the above to automate the process as follows:
Start with a barebones OS install on a VMWare guest
Run a shell script to install Puppet and retrieve its configs from source code control
Puppet to install tools/components/configs
Check out files from source code control to build and deploy our web application
I stumbled across this question and was very surprised that no one has mentioned Vagrant yet.
As Pete TerMaat and others have mentioned, virtualization is a great way to manage and automate development environments. Vagrant basically takes the pain away from setting up these virtual boxes.
Within minutes you can have a completely fresh copy of your favourite Linux distro up and running, and provisioned exactly the same way your production server is.
No more fighting with OSX or Windows to get PHP, MySQL, etc. installed. All software lives and runs inside the virtual machine. You can even SSH in with vagrant ssh. If you make a mistake or break something, just vagrant destroy it, and vagrant up to start over fresh.
Vagrant automatically creates a synced folder to your local file system, meaning you don't need to develop within the virtual machine (i.e. using Vim). Use whatever editor you prefer.
I now create a new "Vagrant box" for almost every project I do. All my settings are saved into the project repository, so it's easy to bring on another team member. They simply have to pull the repo, and run vagrant up, and they are literally ready to go.
This also makes it much easier to handle projects that have different software requirements. Maybe you have some projects that rely on PHP 5.3, but some newer ones that run PHP 5.4. Just install the version you want for that project.
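A minimal Vagrantfile sketch (the box name, forwarded port, and provisioning script are placeholders):

    # Defines the VM each team member gets with "vagrant up"
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"
      config.vm.network "forwarded_port", guest: 80, host: 8080
      config.vm.provision "shell", path: "provision.sh"  # installs PHP, MySQL, etc.
    end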
Check it out!
One important point is to set up your projects in source control such that you can immediately build, deploy and run after checkout.
That means you should also check in helper infrastructure, such as Makefiles, Ant build files, etc., and settings for the tools, such as IDE project files.
That should take care of the setup hassle for individual projects.
For the basic machine setup, you could use a standard image. Another option is to use your platform's tools to automate installation. Under Linux, you could create a meta-package that depends on all the packages you need; under Windows, a similar thing should be possible using MSI or the like.
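On a Debian-based system, such a meta-package can be sketched with equivs; build the control file below with equivs-build (all names and versions illustrative):

    Section: misc
    Priority: optional
    Standards-Version: 3.9.2
    Package: acme-dev-env
    Version: 1.0
    Depends: subversion, apache2, mysql-server, php5
    Description: Acme development environment meta-package
     Installs nothing itself; exists only to pull in the dev stack.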
Edit:
Ideally, instead of checking in helper infrastructure, you check in the information that allows the build to generate the helper infrastructure. This is the approach taken by e.g. the GNU build system (autotools etc.), or by Maven. This is even more elegant, because you can (theoretically) generate infrastructure for any (supported) build environment, thus you are not bound to e.g. one specific IDE, and settings in the helper infrastructure (paths etc.) don't need to duplicate the main project settings.
However, this is also a more complex approach, so if you can't get it to work, I believe checking in stuff like IDE files directly is acceptable.
I like to use Virtual PC or VMware to virtualize the development environment. This provides a standard "dev environment" that can be shared among developers. You don't have to worry about software that a user adds to their own system conflicting with your development environment. It also gives me a way to work on two projects where the development environments can't both be on one system (e.g. using two different versions of a core technology).
Use puppet to configure both your development and production environment. Using a top-notch automation system is the only way to scale your ops.
There's always the option of using virtual machines (see e.g. VMWare Player). Create one environment and copy it over for each new employee with minimal configuration needed.
At a prior place we had everything (and I mean EVERYTHING) in SCM (ClearCase, then SVN). When a new developer came in, they installed ClearCase or SVN and sucked down the repository. This also handles the case where you need to update a particular lib/tool, as you can just have the dev teams update their environment.
We used two repo's for this so code and tools/config lived in separate places.
I highly recommend Blueprint from DevStructure. It's open-source and your use case is actually the exact reason we originally wrote the software. Our goals have somewhat changed, but it still is the perfect tool for what you are describing. In short, you can create reusable server configs - dead simple configuration management. I hope this helps!
https://github.com/devstructure/blueprint (Blueprint # Github)
I've been thinking about this myself. There are some other technologies that you could throw into the mix. Here's what I'm currently setting up:
PXE based pre-seeded installation images (Debian Squeeze). You can start up a bare-metal machine (or new virtual appliance) and select the image from the PXE boot menu. This has the major advantage of being able to install your environment on physical machines (in addition to virtual appliances).
Someone already mentioned Puppet. I use CFEngine but it's a similar deal. Essentially your configuration is documented and centralized in policy files which are continually enforced by an agent on the client.
If you don't want a rigid environment (i.e. developers may choose a combination of tool sets), you can roll your own .deb packages so new devs can type sudo apt-get install acmecorp-eclipse-env or sudo apt-get install acmecorp-intellij-env, for example.
Slightly off-topic, but if you run a Debian based environment (i.e. Ubuntu), consider installing apt-cacher (package proxy). In addition to saving bandwidth, it will make your installations much faster (since packages are cached on your local network).
If you're using OS X and working with Rails, I'd suggest either:
https://github.com/platform45/let-there-be-light
https://github.com/thoughtbot/laptop
If you use machines in a standard configuration, you can image the disk with a fresh perfectly configured install -- that's a very popular approach in many corporations (and not just for developers, either). If you need separately configured OS's, you can tar-bz2 all the added and changed files once a configured OS is turned into your desired setup, and just untar it as root to make your desired environment from scratch.
If you're using a Linux flavor, you've probably got a package management system: think .rpm for Fedora/Red Hat, or .deb for Ubuntu/Debian. Many of the things you describe already have packages available: SVN, Eclipse, etc. You could roll your own packages for company-specific software, create a repository (perhaps only available on the local network), and then your setup could be reduced to a single bash script which would add the company repo to /etc/apt/sources.list (Debian/Ubuntu) and then call a command like:
/home/newhire$ apt-get update && apt-get install some complete package list
you could use buildbot to then automate regular builds for company packages that change often.
Try out DevScript at http://nsnihalsahu.github.io/devscript.
It's one command, like devscript lamp, devscript laravel, or devscript django. In around a few minutes, depending on the speed of your internet connection, the environment is set up.