Develop an application with all its containers instantiated and used from the first dev compilation, rather than waiting until deployment at delivery time

In my development environment, I have my IDE, a database, a web server, and so on installed locally.
There is a setup script for it that runs 80 different commands.
Then, at delivery time (integration, acceptance), I face the messy job of executing a script that creates many Docker containers, each with its own role: database, web server, etc.
Their scripts are subsets of the big one I use on my own local developer machine, but adapted.
It is very difficult to manage the transition between my standalone ("flat", so to speak) dev computer and the containerized version intended for delivery.
I wonder whether there is a way to develop an application that is containerized from the very beginning:
with the whole tree of its containers ready (and not a single container holding everything, which would be cheating),
so that as soon as I compile my sources in my IDE, a simple compilation results in binaries and files going into their proper containers,
and it is in these containers that my application is executed, even in development mode.
Is this possible? Has anyone here already done it?
Or does it have too many drawbacks to be worth attempting?
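For what it's worth, the kind of setup being asked about could look roughly like the sketch below: the application's containers run during development, and the IDE's build output is bind-mounted into the application container so every compile lands there. This is purely illustrative; the image names, ports, and the ./target output directory are assumptions, not part of the question.
    # run the app's containers during development and bind-mount the IDE's
    # build output into the application container
    docker network create devnet
    docker run -d --name db  --network devnet postgres:9.6
    docker run -d --name web --network devnet -p 8080:8080 \
        -v "$PWD/target:/usr/local/tomcat/webapps" tomcat:8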

Related

What are the use cases of Docker on real projects?

I have read about what Docker is, but I'm having a hard time finding out what the real scenarios for using Docker are.
It would be great to see your usages here.
I'm replicating the production environment with it: on each commit to the project, Jenkins builds the binaries, deploys them there, launches the required daemons, and runs the integration tests, all in a very short time (only a few seconds more than the integration tests themselves take). Having no need to boot, plus very little memory/CPU/disk overhead, is great for that kind of thing.
I could extend that use to development (just adding a volume mapping my git repository's code into the container, at least for scripting languages) to have the production environment running the code I'm actually editing, at a fraction of what VirtualBox would require.
I also needed to test how to integrate some third-party code, which modified the DB, into a production system. I cloned the DB into one container, installed the production system in another, launched both, and iterated on the integration until I got it right, going back to zero to try again in seconds; this was faster, cheaper, and more scriptable than doing it with VMs plus snapshots.
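A rough sketch of that reset-to-zero loop (the image and container names here are invented for illustration):
    docker run -d --name db  prod-db-snapshot       # DB cloned from production, baked into an image
    docker run -d --name app --link db:db prod-app  # the production system under test
    # ...attempt the third-party integration, inspect the results...
    docker rm -f app db                             # throw both containers away
    docker run -d --name db  prod-db-snapshot       # and start over from the clean snapshot in seconds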
I also run several desktop browser instances in containers, each with its own plugins, cookies, data storage, and so on kept separate. The Docker repository's desktop-integration example is a good start for this, but I'm planning to test subuser to extend this kind of usage.
I've used Docker to implement a virtualized build server which any user could ask to run a build off their personal git branch in our canonical environment.
Each SSH connection made to the server was connected to a new container, ensuring that all builds were isolated from each other (a major pain point in the past), ensuring that the container's state couldn't be corrupted (since changes were all isolated to that single instance), and ensuring that even developers on platforms such as Windows where Docker (and other tools in our canonical build environment) couldn't be run locally would be able to run builds.
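One possible shape for the one-container-per-SSH-connection trick, assuming sshd's ForceCommand points at a small wrapper script (the image name and build script path are made up):
    #!/bin/sh
    # wrapper invoked by sshd for every connection: each build runs in its own
    # throwaway container, so builds cannot interfere with or corrupt each other
    exec docker run --rm -i canonical-build-env \
        /usr/local/bin/run-build.sh "$SSH_ORIGINAL_COMMAND"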
We use it for the following:
We have a Jenkins container which we can use to bring up our Jenkins server. We mount the workspace using volumes, so we can migrate the server easily just by copying the files and launching the container somewhere else.
We use a Jetty container to easily deploy our war files in our production and development environments.
We use a whole host of other monitoring tools, such as Uptime, which we have containers for, so that we can bring them up and down on various hosts with a single command.
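Rough command-line equivalents of the Jenkins and Jetty patterns above (the image tags, ports, and host paths are examples, not the exact setup described):
    docker run -d -p 8080:8080 -v /srv/jenkins_home:/var/jenkins_home jenkins                    # Jenkins with its workspace on a volume
    docker run -d -p 8081:8080 -v /srv/builds/app.war:/var/lib/jetty/webapps/ROOT.war jetty      # Jetty serving a war file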
I use docker to build and test our software on several different Linux distributions (RHEL 4/5/6/7, Ubuntu 12.04, 14.04).
Docker makes it easy and fast to create minimalistic and consistent build environments.
Docker gives you the benefits that other virtualization solutions give you, at a fraction of the resources needed.
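A sketch of that multi-distribution build loop (the tags are examples; in practice you would use derived images with the tool-chain pre-installed, and the RHEL images would need locally built bases):
    for img in centos:5 centos:6 centos:7 ubuntu:12.04 ubuntu:14.04; do
        docker run --rm -v "$PWD":/src -w /src "$img" make   # same source tree, different build environment each pass
    done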

With Continuous Integration, why are tests run after committing instead of before?

While I only have a GitHub repository that I'm pushing to (alone), I often forget to run tests, or forget to commit all relevant files, or rely on objects residing on my local machine. These result in build breaks, but they are only detected by Travis-CI after the erroneous commit. I know TeamCity has a pre-commit testing facility (which relies on the IDE in use), but my question is with regard to the current use of continuous integration as opposed to any one implementation. My question is:
Why aren't changes tested on a clean build machine - such as those which Travis-CI uses for post-commit testing - before those changes are committed?
Such a process would mean that there would never be build breaks, meaning that a fresh environment could pull any commit from the repository and be sure of its success; as such, I don't understand why CI isn't implemented using pre-commit testing.
I'll preface my answer with the detail that I am using GitHub and Jenkins.
Why should a developer have to run all tests locally before committing? Especially in the Git paradigm, that is not a requirement. What if, for instance, it takes 15-30 minutes to run all of the tests? Do you really want your developers, or you personally, sitting around waiting for the tests to run locally before you commit and push your changes?
Our process usually goes like this:
1. Make changes in a local branch.
2. Run any new tests that you have created.
3. Commit the changes to the local branch.
4. Push the local changes to GitHub and create a pull request.
5. Have the build process pick up the changes and run the unit tests.
6. If tests fail, fix them in the local branch and push again.
7. Get the changes code-reviewed in the pull request.
8. After approval and all checks have passed, push to master.
9. Rerun all unit tests.
10. Push the artifact to a repository.
11. Push the changes to an environment (e.g. DEV, QA) and run any integration/functional tests that rely on a full environment.
12. If you have a cloud, you can push your changes to a new node and, only after all environment tests pass, reroute the VIP to the new node(s).
13. Repeat step 11 until you have pushed through all pre-prod environments.
14. If you are practicing continuous deployment, push your changes all the way to PROD once all testing, checks, etc. pass.
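From the developer's side, steps 1-4 are just the usual feature-branch commands (a sketch; the branch, test, and build-tool names below are placeholders):
    git checkout -b feature/new-validation
    mvn -Dtest=NewValidationTest test          # run only the tests you just wrote
    git commit -am "Add input validation"
    git push origin feature/new-validation     # then open the pull request; the CI server takes it from there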
My point is that it is not a good use of a developer's time to run tests locally, impeding their progress, when you can off-load that work onto a Continuous Integration server and be notified of issues that you need to fix later. Also, some tests simply can't be run until you commit them and deploy the artifact to an environment. If an environment is broken because you don't have a cloud and maybe only have one server, then fix it locally and push the changes quickly to stabilize the environment.
You can run tests locally if you have to, but this should not be the norm.
As to the multiple developer issue, open source projects have been dealing with that for a long time now. They use forks in GitHub to allow contributors the chance to suggest new fixes and functionality, but this is not really that different from a developer on the team creating a local branch, pushing it remotely, and getting team buy-in via code review before pushing. If someone pushes changes that break your changes then you try to fix them yourself first and then ask for their help. You should be following the principle of "merging early and often" as well as merging in updates from master to your branch periodically.
The assumption that if your code compiles and the tests pass locally, no build can be broken, is wrong. That only holds if you are the only developer working on that code.
Let's say I change an interface you are using: my code will compile and pass tests as long as I don't have your updated code that uses my interface,
and your code will compile and pass tests as long as you don't have my updated interface.
And when we both check in our code, the build machine explodes...
So CI is a process which basically says: put your changes in as soon as possible
and test them on the CI server (the code should, of course, be compiled and tested locally first).
If all developers follow those rules,
the build will still break sometimes, but we will know about it sooner rather than later.
The CI server is not the same as the version control system. The CI server, too, checks the code out of the repository. And therefore the code has already been committed when it gets tested on the CI server.
More extensive tests may be run periodically, rather than at time of checking in, on whatever is the current version of the code at the time of testing. Think of multi-platform tests or load tests.
Generally, of course, you'll unit test your code on your development machine before checking it in.

Studying web servers such as apache httpd and tomcat

I would like to see how everything is handled behind the scenes in web servers such as Apache httpd and Tomcat. How does one go about stepping through these applications, making changes, and then viewing the changes? Applications this complex use scripts for building, and I presume they take a while to compile; it seems to me that there would be more to it than simply downloading the source code and importing it into Eclipse. Or is it actually that simple?
And how do developers who want to work on the code of these projects get around the fact that it takes a fair amount of time to compile these applications (and other non-trivial applications such as web browsers)? When I am working on smaller projects I am constantly compiling and then debugging. I imagine that is not feasible when it can take several minutes to compile?
Easy: just read.
http://tomcat.apache.org/tomcat-7.0-doc/building.html
Also, http://wiki.apache.org/tomcat/FAQ/Developing
The current Tomcat 7.0.x trunk takes about 17 seconds to build on my MacBook Pro, and that included downloading a few dependencies that I didn't already have laying around. If you want to re-compile a single .java file, you can re-run the entire build and the toolchain (really just Apache Ant) will figure out which files actually need to be recompiled.
You only modified one source file? Only one source file will be re-compiled when you run ant deploy (you don't even need the "deploy": it's the default). If you use Eclipse or some other similar IDE, it will recompile on the fly and you don't need to worry about the command line or any of that.
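Roughly, the steps the build documentation above walks you through look like this (a sketch only; check the linked page for the current repository URL and property settings):
    svn checkout http://svn.apache.org/repos/asf/tomcat/tc7.0.x/trunk tomcat7
    cd tomcat7
    cp build.properties.default build.properties   # tells Ant where to put downloaded dependencies
    ant                                            # default target is "deploy"; re-running recompiles only changed .java files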
If you have further questions, please join the Tomcat users' mailing list (or the developers' list) and join the community.

How to automate development environment setup? [closed]

Every time a new developer joins the team or a developer's computer changes, the developer needs to do a lot of work to set up the local development environment so that the current project works. As a Scrum team we are trying to automate everything, including deployment and tests, so what I am asking is: is there a tool or a practice to automate local development environment setup?
For example, to set up my environment I first had to install Eclipse, then SVN, Apache, Tomcat, MySQL, and PHP. After that I populated the DB and had to make minor changes in various configuration files, etc. Is there a way to reduce this labor to one click?
There are several options, and sometimes a combination of these is useful:
automated installation
disk imaging
virtualization
source code control
Details on the various options:
Automated installation: tools for automating installation and configuration of a workstation's various services, tools, and config files:
Puppet has a learning curve but is powerful. You define classes of machines (development box, web server, etc.) and it then does what is necessary to install, configure, and keep the box in the proper state. You asked for one-click, but Puppet by default is zero-click, as it checks your machine periodically to make sure it is still configured as desired. It will detect when a file or mode has been changed, and fix the problem. I currently use this to maintain a handful of RedHat Linux boxes, though it's capable of handling thousands. (Does not support Windows as of 2009-05-08).
Cfengine is another one. I've seen this used successfully at a shop with 70 engineers using RedHat Linux. Its limitations were part of the reason for Puppet.
SmartFrog is another tool for configuring hosts. It does support Windows.
Shell scripts. RightScale has examples of how to configure an Amazon EC2 image using shell scripts.
Install packages. On a Unix box it's possible to do this entirely with packages, and on Windows msi may be an option. For example, RubyWorks provides you with a full Ruby on Rails stack, all by installing one package that in turn installs other packages via dependencies.
Disk images: then of course there are also disk imaging tools for storing an image of a configured host such that it can be restored to another host. As with virtualization, this is especially nice for test boxes, since it's easy to restore things to a clean slate. Keeping things continuously up to date is still an issue: is it worth making new images just to propagate a configuration file change?
Virtualization is another option, for example making copies of a Xen, VirtualPC, or VMWare image to create new hosts. This is especially useful with test boxes, as no matter what mess a test creates, you can easily restore to a clean, known state. As with disk imaging tools, keeping hosts up-to-date requires more manual steps and vigilance than if an automated install/config tool is used.
Source code control: once you've got the necessary tools installed and configured, doing builds should be a matter of checking out what's needed from a source code repository and building it.
Currently I use a combination of the above to automate the process as follows:
Start with a barebones OS install on a VMWare guest
Run a shell script to install Puppet and retrieve its configs from source code control
Puppet to install tools/components/configs
Check out files from source code control to build and deploy our web application
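A hypothetical bootstrap script for step 2 above (the package manager, repository URL, and manifest name are placeholders for whatever your shop uses):
    yum install -y puppet git
    git clone git://scm.example.com/ops/puppet.git /etc/puppet
    puppet apply /etc/puppet/manifests/devworkstation.pp   # step 3: Puppet installs tools/components/configs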
I stumbled across this question and was very surprised that no one has mentioned Vagrant yet.
As Pete TerMaat and others have mentioned, virtualization is a great way to manage and automate development environments. Vagrant basically takes the pain away from setting up these virtual boxes.
Within minutes you can have a completely fresh copy of your favourite Linux distro up and running, and provisioned exactly the same way your production server is.
No more fighting with OSX or Windows to get PHP, MySQL, etc. installed. All software lives and runs inside the virtual machine. You can even SSH in with vagrant ssh. If you make a mistake or break something, just vagrant destroy it, and vagrant up to start over fresh.
Vagrant automatically creates a synced folder to your local file system, meaning you don't need to develop within the virtual machine (i.e., using Vim). Use whatever your editor of choice is.
I now create a new "Vagrant box" for almost every project I do. All my settings are saved into the project repository, so it's easy to bring on another team member. They simply have to pull the repo, and run vagrant up, and they are literally ready to go.
This also makes it much easier to handle projects that have different software requirements. Maybe you have some projects that rely on PHP 5.3, but some newer ones that run PHP 5.4. Just install the version you want for that project.
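The per-project workflow described above boils down to a handful of commands (the box name below is just an example):
    vagrant init hashicorp/precise64   # writes a Vagrantfile that gets committed with the project
    vagrant up                         # create and provision the VM
    vagrant ssh                        # shell into it
    vagrant destroy                    # broke something? throw the VM away and "vagrant up" again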
Check it out!
One important point is to set up your projects in source control such that you can immediately build, deploy and run after checkout.
That means you should also check in helper infrastructure, such as Makefiles, Ant build files, etc., and settings for the tools, such as IDE project files.
That should take care of the setup hassle for individual projects.
For the basic machine setup, you could use a standard image. Another option is to use your platform's tools to automate installation. Under Linux, you could create a meta-package that depends on all the packages you need. Under Windows, a similar thing should be possible using MSI or the like.
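On Debian/Ubuntu, one way to build such a meta-package is the equivs tool (shown here as a sketch; the package name and dependency list are examples):
    sudo apt-get install equivs
    equivs-control acme-dev-env        # generates a template control file
    # edit acme-dev-env: set Package: acme-dev-env and
    # Depends: eclipse, subversion, apache2, tomcat7, mysql-server, php5
    equivs-build acme-dev-env          # produces acme-dev-env_*.deb, installable with dpkg -i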
Edit:
Ideally, instead of checking in helper infrastructure, you check in the information that allows the build to generate the helper infrastructure. This is the approach taken by e.g. the GNU build system (autotools etc.), or by Maven. It is even more elegant, because you can (theoretically) generate infrastructure for any (supported) build environment, and thus you are not bound to one specific IDE, and settings in the helper infrastructure (paths etc.) don't need to duplicate the main project settings.
However, this is also a more complex approach, so if you can't get it to work, I believe checking in stuff like IDE files directly is acceptable.
I like to use Virtual PC or VMware to virtualize the development environment. This provides a standard "dev environment" that can be shared among developers. You don't have to worry about software that a user might add to their system conflicting with your development environment. It also gives me a way to work on two projects whose development environments can't both be on one system (using two different versions of a core technology).
Use puppet to configure both your development and production environment. Using a top-notch automation system is the only way to scale your ops.
There's always the option of using virtual machines (see e.g. VMWare Player). Create one environment and copy it over for each new employee with minimal configuration needed.
At a prior place we had everything (and I mean EVERYTHING) in SCM (ClearCase, then SVN). When a new developer came in, they installed ClearCase|SVN and sucked down the repository. This also handles the case where you need to update a particular lib/tool, as you can just have the dev teams update their environment.
We used two repo's for this so code and tools/config lived in separate places.
I highly recommend Blueprint from DevStructure. It's open-source and your use case is actually the exact reason we originally wrote the software. Our goals have somewhat changed, but it still is the perfect tool for what you are describing. In short, you can create reusable server configs - dead simple configuration management. I hope this helps!
https://github.com/devstructure/blueprint (Blueprint # Github)
I've been thinking about this myself. There are some other technologies that you could throw into the mix. Here's what I'm currently setting up:
PXE based pre-seeded installation images (Debian Squeeze). You can start up a bare-metal machine (or new virtual appliance) and select the image from the PXE boot menu. This has the major advantage of being able to install your environment on physical machines (in addition to virtual appliances).
Someone already mentioned Puppet. I use CFEngine but it's a similar deal. Essentially your configuration is documented and centralized in policy files which are continually enforced by an agent on the client.
If you don't want a rigid environment (i.e. developers may choose a combination of tool-sets), you can roll your own deb packages so new devs can type sudo apt-get install acmecorp-eclipse-env or sudo apt-get install acmecorp-intellij-env, for example.
Slightly off-topic, but if you run a Debian-based environment (e.g. Ubuntu), consider installing apt-cacher (a package proxy). In addition to saving bandwidth, it will make your installations much faster (since packages are cached on your local network).
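Pointing clients at the cache is then a one-line apt configuration (the host name below is a placeholder, and 3142 is the proxy's usual default port; adjust for your setup):
    echo 'Acquire::http::Proxy "http://apt-cache.internal:3142";' | sudo tee /etc/apt/apt.conf.d/01proxy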
If you're using OS X and working with Rails, I'd suggest either:
https://github.com/platform45/let-there-be-light
https://github.com/thoughtbot/laptop
If you use machines in a standard configuration, you can image the disk with a fresh perfectly configured install -- that's a very popular approach in many corporations (and not just for developers, either). If you need separately configured OS's, you can tar-bz2 all the added and changed files once a configured OS is turned into your desired setup, and just untar it as root to make your desired environment from scratch.
If you're using a Linux flavor, you've probably got a package management system: think .rpm for Fedora/Red Hat, or .deb for Ubuntu/Debian. Many of the things you describe already have packages available: SVN, Eclipse, etc. You could roll your own packages for company-specific software, create a repository (perhaps only available on the local network), and then your setup could be reduced to a single bash script which adds the company repo to /etc/apt/sources.list (Debian/Ubuntu) and then calls a command like:
/home/newhire$ apt-get update && apt-get install some complete package list
You could then use Buildbot to automate regular builds for company packages that change often.
Try out DevScript at http://nsnihalsahu.github.io/devscript.
It's one command, like
devscript lamp, devscript laravel, or devscript django. In around a few minutes, depending on the speed of your internet connection, the environment is set up.

How do I set up a build server on the cheap/free?

Currently I'm tasked with doing the daily build. We have an ASP.NET 2005 website with a SQL Server 2005 backend. Our current source control is Visual Source Safe 2005.
At this point, I use the brute-force method of daily builds.
Get Latest version of source code
Get Latest version of Database release script
Backup old website files to a directory
Publish new code to my local machine
Run on my server to keep the test/stage site working
Push newly created files to the website
Run SQL Script on test database (assuming updates, otherwise I don't bother)
Test website on the Test Server.
Looking at the idea of automated builds intrigues me since it means that I do less each morning. How would you recommend I proceed? I want to have a fully fleshed out idea before I present it to my boss.
Ditch VSS, move to Subversion, and check out CruiseControl.NET. Alternatively, if you have an MSDN developer license, you can run TFS Workgroup Edition and set up a build server on any old XP box. It's what we do at our shop.
As Assaf noted, you can use CC.NET with VSS directly. Nice.
TeamCity has worked well for me. It has a very simple setup. Combine it with an MSBuild script for your operations and you're automatic.
For build management I wholeheartedly recommend TeamCity. It doesn't require IIS6 (like CC.NET does) since it runs on its own copy of Tomcat, and the setup is all done through various forms. This is a big deal to me since the build server is just an XP Pro box. It integrates well with SVN and there is no crazy XML file manipulation like I had to do with CruiseControl.NET. Big win for me.
For a build runner we use NAnt to send emails to various people, copy the packaged builds where they're supposed to go, run NUnit and NCover, and deploy the software to our web farm.
For automated testing we use Watin.
http://www.nunit.org/index.php
http://www.jetbrains.com/teamcity
http://ncover.sourceforge.net/
http://subversion.tigris.org/
http://nant.sourceforge.net/
http://watin.sourceforge.net/
Try CruiseControl.Net. It's free, and whatever customized daily/continuous routine you want it to perform you can always add with scripts.
Remember, it's not just about daily (nightly) builds, but also about letting you catch build errors in time (since it continuously builds after every source commit/check-in). You don't necessarily test every code change on every possible platform and build configuration, but CC can do exactly that for you (in the background).
http://confluence.public.thoughtworks.org/display/CCNET/Visual+Source+Safe+Source+Control+Block
All of what you are doing can be performed by a set of batch files, depending on how automated your test environment is. The main batch file can be started as a 'scheduled task' at midnight or whatever. That's how we 'do it cheap' here and at other places I've worked. If you need help with a particular batch, I can provide a sample.
I second (or third) the recommendation for Subversion/CruiseControl.NET. Also, if it is appropriate, check out hosted services for SVN like CVSDude. You'll probably become well versed with MSBuild in the process too. Once you get it set up it is great.
The cost doesn't come from licensing of the tools or even hardware necessarily, but from your time building and maintaining the system - and depending on what you are doing, that could become significant.
Start with the basics and incrementally improve it over time. Like anything else, if you try to come out of the gate with lots of automation and functionality, you could find yourself mired in it full-time for weeks.
Whatever tools you use, house them in a virtual machine (e.g., VMware).
When the equipment inevitably goes south, you can copy the image onto any machine and not miss a beat when your build server decides to take the day off (assuming, of course, that you back it up).