What to know before setting up a new Web Dev Env?

Say you want to create a new environment for a team of developers to build a large website on a LAMP stack.
I am not interested in the knowledge needed for coding the website (php,js,html,css,etc.). This stuff I know.
I am interested in what you need to know to set up a good environment and workflow with a test server, production server, version control, backups, etc.
What would be a good learning path?

As someone who has led this process at several companies, my recommendation is to gradually raise the "maturity" of your organisation as a software factory by incrementally consolidating a set of practices in an order that makes sense for your needs. The order I tend to follow (starting with things that I consider more basic, moving to the more advanced stuff):
Version control - control your sources. I used to work with SVN but I'm gradually migrating my team to Mercurial (I agree with meagar's recommendation of a distributed VCS). A great Hg tutorial is hginit.
Establish a clear release process, label your releases in VCS, do clean builds in a controlled environment, test and release from these.
Defect tracking - be systematic about your bugs and feature requests. I tend to use Trac because it gives me a more or less complete solution for project management plus a wiki that I use as a knowledge base. But you have choices galore (Jira, Bugzilla, etc...)
Establish routine testing practices: unit tests, e.g. with one of the xUnit frameworks (make it a habit to write unit tests at least for new functions and for old code you modify), and integration/system tests (for web apps, use a tool like Selenium).
Make your tests run frequently, as a part of an automated build process
Eventually, write your tests before you code (Test-Driven Development) and strive to increase coverage.
Go a step further in your build/test/release cycle by setting up a continuous integration system, to make sure your build and tests are run regularly, at least nightly (a minimal sketch of such a nightly loop follows below). I recently started using Hudson and it is great for our Java/Maven projects, but you can use it for any other build process as well.
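To make that loop concrete, here is a minimal shell sketch of the kind of nightly job a CI server ends up running; the repository URL and the build/test script names are placeholders for whatever your project actually uses:

    #!/bin/sh
    # nightly-build.sh -- hypothetical nightly CI job: clean checkout, build, test, tag.
    set -e

    hg clone https://hg.example.com/mysite build-area    # fresh, controlled checkout
    cd build-area

    ./build.sh                                           # your build step (placeholder)
    ./run-tests.sh                                       # unit + integration tests (placeholder)

    # Only label a release in the VCS once the clean build and the tests pass.
    hg tag "release-$(date +%Y%m%d)"
    hg push

A CI server such as Hudson is essentially this script plus scheduling, build history and a web UI.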
In terms of testing environments, I agree with meagar's recommendations. We have these layers:
Test at developers workstations (should contain a full setup to run your code)
Staging environment: clone your production environment as closely as possible and deploy and run your app there. We also use VMs.
Production preview: we deploy our app to the production servers with production data but in a different "preview" URL for our internal use only. We run part of our automated Integration tests against this server, and do some additional manual testing with internal users
Production - and keep fingers crossed ;)
In terms of backup, at least for your source code, a distributed VCS gives you the advantage that your full repository is replicated on many machines, minimising the risk of data loss (a much bigger concern with a centralised repository, as is the case with SVN).
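With Mercurial, for instance, an off-site mirror is just another repository you push to; the host and path below are placeholders:

    # One-off: create the mirror on a backup machine.
    hg clone . ssh://backup.example.com//srv/hg/mysite

    # Routinely (e.g. from cron): push everything the mirror does not have yet.
    hg push ssh://backup.example.com//srv/hg/mysite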

Before you do anything else, ask your developers what they want out of a test/production environment. You shouldn't be making this decision, they should. The answer to this depends entirely on what kind of workflow they're familiar with and what kind of software they'll be developing.
I'd personally recommend a distributed VCS like git or mercurial, local WAMP/LAMP stacks on each developer's workstation (shared "development" servers are silly) and a server running some testing VMs which are duplicates of your production environment. You can't ask for more specific advice than that without involving your developers.
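As a rough illustration of the "local LAMP stack on each workstation" point, on a Debian/Ubuntu machine it is little more than this sketch (package names vary by distribution and PHP version, and the project path is only an example):

    # Install Apache, MySQL and PHP locally; adjust package names to your distro.
    sudo apt-get update
    sudo apt-get install apache2 mysql-server php libapache2-mod-php php-mysql

    # Serve the developer's own working copy (the path is illustrative).
    sudo ln -s ~/projects/mysite /var/www/html/mysite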

Related

What is the difference between sandbox and staging environments?

If the staging environment is an isolated environment for testing by testers, and a sandbox is also an isolated environment for testing, then what are the differences?
I could not find any clear and useful information on this.
Good question. Given the background you provide, they appear the same, and in some ways they are: both are isolated from the production environment, and neither should contain production data. However, there are a number of differences, particularly in how they are used.
Staging environment
A good staging environment will be a close replica (less the data) of the production system. It is used to test upgrades and patches prior to going to production. This means that it should be a controlled environment where the engineers responsible for the production deployment are allowed to test the rollout instructions.
Access restrictions in a staging environment should be as close to production as possible, i.e. deployment is done by the engineers who are responsible for production deployment, and developers get no root (or otherwise privileged) access.
Sandbox environment
As the name suggests, this is typically a playground for the engineering team. It has fewer restrictions than a staging environment because it is designed to allow the engineers to try things out easily and quickly. A sandbox environment is likely to drift away from the production environment as engineers try out different versions of the product, dependencies, plugins, etc.
Access to a sandbox environment typically allows privileged access to any engineer (developer, QA, etc.) who is working on the project, for easy and quick deployment and debugging.

Docker - what real value does it bring to our team?

I am very, very new to Docker. Our team has had a very nice deployment lineup where we have different CI engines for different projects, including Jenkins and TeamCity.
Developers check in, CI takes over and deploys, and it's perfectly ready for the test team to test. I always thought this to be a perfect model. Of course, some parts of our implementation have their flaws, but it worked very well for what we wanted.
Now, our DevOps team is introducing Docker, where the test teams get a Docker image from the Docker registry every time we run a build from TeamCity. While it sounds really fancy, I am still failing to understand the benefit of it.
After my research, my conclusion was that Docker can be a good lightweight replacement for VMs. But that only matters if you are using VMs in the first place, and we are not using any. I just do not understand what the real value is here. Also, while searching I found a relatively good link on Docker:
https://www.ctl.io/developers/blog/post/what-is-docker-and-when-to-use-it/
where they discuss when you should use Docker, and one of the points says:
Use Docker whenever your app needs to go through multiple phases of development (dev/test/qa/prod, try Drone or Shippable, both do Docker CI/CD)
OK. However, they do not elaborate further on why Docker is useful when my app has to go through multiple phases.
And how is it so much better than a regular dev/test setup when the existing setup is already working smoothly?
First, you are right to compare it to VMs: it is similar to a VM in that it gives you an isolated environment. However, Docker is incredibly lightweight; this is the property that surprised me most in the beginning. As opposed to virtual machines, which are fully isolated from the host, containers share the host's resources much more efficiently, so many containers can run simultaneously on one machine with very little overhead. You can also configure containers to talk to each other (via volume or port bindings).
Furthermore, in my team, docker brings the following benefits:
our product consists of one big application and a few microservices, but we want to release everything as one package with the inter-dependencies between the applications resolved, which eliminates the problem of figuring out which versions of the application and microservices should be deployed together (compatibility). That is, the image contains all you need, and you can bring all applications up/down together or one by one using docker-compose (see the command sketch after this list). You do not need to deploy; you simply pull the image and start the containers. If you wish to stop one of the microservices, it can be done without affecting the others.
developers in the team can run the very same image on a local machine, for example to troubleshoot a problem that occurred in production, which means troubleshooting can be done in the same environment as production. This brings environment standardization and ends the "but it works on my machine" talk.
another benefit it brings us is the following: we build a Docker image, run our tests against it, and push it to the registry once all these phases succeed, which translates into great portability.
The ability to version the containers: images are tagged, so you can easily compare the current version with previous versions, and if you wish to roll back, that is done smoothly (again, see the sketch below).
Isolating and securing applications. All containers are isolated and you can easily control what goes in and out.
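A minimal sketch of the commands behind the compose and rollback points above; the registry, image tags and service name are placeholders:

    # Bring the whole package up, or stop one microservice without touching the rest.
    docker-compose pull               # fetch the images that make up this release
    docker-compose up -d              # start everything in the background
    docker-compose stop reporting     # stop a single service (the name is an example)

    # Rolling back is just running the previously tagged image again.
    docker pull registry.example.com/myapp:1.4
    docker run -d --name myapp registry.example.com/myapp:1.4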
It took me a year before I got used to the idea, but now it seems simple enough.
I think part of that comes from the fact that people keep calling Docker a "virtual machine", which is not accurate. That's really just a nickname for what's happening behind the scenes. In a lot of ways, Docker will NOT replace a complete virtualization solution, such as VMWare. It does, however, bring forth a new way of thinking about infrastructure. One that many people have a difficult time wrapping their heads around.
You can start asking yourself: What makes a Linux distribution unique?
Aside from the kernel, everything else is just a "standard way" of organizing binaries, libraries, runtime and configuration files. You need your binaries in /bin, your libs in /lib, your configuration in /etc. User installations get placed under /usr...
Most distributions will keep the main structure from the Unix legacy and add its own quirks. Each one will have its own way to manage and distribute packages. Each will maintain their own versions of libraries, drivers, etc.
The key ingredient is the kernel. That's something they all have in common. Nowadays, recent builds of the Linux kernel are compatible with pretty much all major distributions available. So, aside from /boot, most of everything else is just a matter of having the right files in the right place with the right permissions.
Now, imagine you take all that distribution bundle (except the kernel) and place it all in another directory of your running OS. Taking advantage of the same kernel you are already running, you isolate a new process so that it "thinks" that / is now that directory. Bingo! This process now "thinks" it's running all by itself on another operating system.
Docker builds on top of Linux containers, which allow us to do exactly that, but in a friendlier and easier way. Don't think of it as a virtual machine. Think of it as process isolation. The running kernel will share the machine's resources with this process, while keeping it isolated from the rest of the system. It's like jails on steroids.
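A rough way to see that idea in practice (the unpacked root directory and image name are only examples):

    # The bare idea: make a shell believe an unpacked distribution tree is "/".
    sudo chroot ./debian-rootfs /bin/sh

    # Docker wraps the same isolation (plus namespaces, cgroups, images and
    # networking) behind a single command.
    docker run -it --rm debian /bin/sh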
That was a broad simplification. But, given the concept, think about the implications of this idea.
You can have, on the same host, multiple processes with completely different environments that might otherwise conflict with each other. One may be a legacy binary that needs old libraries in place (legacy systems that never die). Another may be the most recent build of a bleeding-edge technology. Sharing the same kernel is efficient and valuable resource management.
The most value I found comes from managing the infrastructure. Once you install Docker on the hosts, configure a swarm, and define a way of deploying containers, you mostly forget about the hosts. Adding users, installing packages, customizing, editing configuration files... All that becomes a development task on your desktop. There's an incentive to script more, to automate more. To keep your hands away from the physical or virtual machines, unless absolutely necessary.
Gone are the days when someone changed some obscure setting on the server to work around some weird application behavior, forgot to tell anyone about it and took a vacation. Changes to the environment can be committed to version control, tracked and improved by everyone on the team. If your datacenter goes through a disaster, recreating the whole environment is a matter of rebuilding images and redeploying containers. Your infrastructure becomes consistent and reproducible, while keeping the doors open to a wide variety of operating systems and customized configurations for each application.
Developers can take advantage of Docker by recreating dev/staging/production environments on their desktops. No need to pollute a dev machine with application servers and database installations, or even to pay the toll of VirtualBox to emulate all that.
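Since a swarm is mentioned above, "rebuilding images and redeploying containers" can be as small as this sketch (registry, tag and service name are placeholders):

    # Rebuild the image from the version-controlled Dockerfile and publish it.
    docker build -t registry.example.com/myapp:2.3 .
    docker push registry.example.com/myapp:2.3

    # Roll the running service over to the new image across the swarm.
    docker service update --image registry.example.com/myapp:2.3 myapp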
Testing can be automated with a higher level of isolation. The Selenium team already has official Docker images. Creating an entire test hub should be a walk in the park with those puppies.
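For instance, the official images can stand up a small grid in two commands (this uses the --link style of that era; newer images favour a user-defined network, and the tags are left to you):

    # A Selenium hub plus one Chrome node, using the official images.
    docker run -d -p 4444:4444 --name selenium-hub selenium/hub
    docker run -d --link selenium-hub:hub selenium/node-chrome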
Building custom software, such as compiling Nginx with third-party modules, can also be done inside containers from specialized images. No need to keep an entire server dedicated to it, or to pollute your desktop with all the dependencies and build packages.
Overall, we've been having a great experience with Docker. We've migrated our staging environment to this new platform, and plan to migrate other parts of the infrastructure as well, eventually into production. So far, so good.
I hope you can convince enough people to take a better look at it. I'll admit, it took me some time to get used to the idea. But once you get it, it's actually worth it.

Deployment Environment Responsibility [closed]

This might not be a technical query but rather a process-driven one. Please redirect me to the right forum if this is not the place to ask such a question.
Typically in a project, we have a deployment environment where the development team deploys the code for testing purposes, and the testing team executes test cases on that environment.
But I have seen projects where there are multiple environments for different teams to test on, and I struggle to understand the point; I do not see any clear reason to have so many environments.
Two environments:
1. Lower environment - developers can use this environment to test their code (this environment is an exact replica of the higher environment, where internal and external testing will happen).
2. Higher environment - where multiple testing teams can test; from my experience this seems to be the stable environment to test on.
But I see multiple environments where testing happens with no apparent concrete reason. My question is: whose responsibility is it to support multiple environments? I find it difficult for the development team to support multiple environments on top of regular dev activities, unit test preparation, and getting clarification from design or business on user stories.
Any suggestion would be highly appreciated.
My question is: whose responsibility is it to support multiple environments?
Depending upon the size of the team and the roles that you have in it, responsibility usually lies with a developer, a tester, or a release manager.
I find it difficult for the development team to support multiple environments apart from regular dev activities
Deployments across environments can and should be automated. Make sure that a proper source control tool is in place and all developers check in their code there. Scripts can be created once and used for every deployment. There are continuous integration tools available which can help with automated deployment as well, by fetching the code from the source control repository and making an application build from it. This will save everyone's time and minimize human errors.
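As an illustration, the deployment the CI tool triggers can be a single script kept in source control; the repository URL, build step and target hosts below are placeholders:

    #!/bin/sh
    # deploy.sh <environment> -- hypothetical deployment script, e.g. "deploy.sh staging".
    set -e
    ENV="$1"

    svn export --force https://svn.example.com/myapp/trunk build/   # clean copy of the code
    ./build/package.sh                                              # your build step

    # Same script for every environment, only the target changes: no manual copying.
    rsync -az build/output/ "deploy@${ENV}.example.com:/var/www/myapp/"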
Release management best practices involve setting up different environments which are mainly:
Development
Test/QA
Staging
Production/Live
Development environment:
This is where the development team runs its code. It could be the developer's workstation or a server shared by several developers working together on the same project. This environment is frequently updated by the developers, so testers may not be able to use it for executing proper test cycles. Anyone from the dev team can update it. This is what you have termed the Lower Environment.
Test environment:
A place where testing can be conducted. This would include functional testing as well as performance testing in a physical environment with hardware and software that is identical to the production environment. This environment is less frequently updated and provides a common platform for testers to report bugs on. Testers would not have to worry about frequent updates (by developers) and at the same time developers would not have to wait for the test cycle to complete so that they can update this environment. Updates to this environment should be scheduled and properly announced to the related group. Updates to this environment should be automated but controlled by the QA team/manager.
Staging environment:
This is preferred to be a mirror of the production setup. It is used as a pre-production location and contains the Release Candidate, the next version of the application planned to be made live. Final testing and client/manager approvals are done here. It is used to verify installation, configuration, and migration scripts/procedures before they are applied to the production environment. The software installed here should closely match the software installed in the production environment. However, it may be acceptable for the staging environment to have lower hardware capacity, since this environment is not used for measuring performance. Updates to this environment are not frequent and are highly controlled (done usually by the release manager). On some projects, the developer, the release manager, and the QA tester can actually be the same person, but it should be clear that the roles are different.
Production environment:
This is the live environment which is available to all the end users. Similar to staging, only a selected group should be able to update the live setup. A developer should not be making any changes directly to the staging or production environments.
Hi, at my previous job we also had different environments, but here is the thing: in your case you have two environments, lower and higher, which is good.
My suggestion is that you need a dedicated person in charge of all deployments; call them the "deployment person/team". They check that all developers follow the agreed standards (coding conventions, etc.) and they are the ones who deploy to the testing site (the QA site), so that the development team can focus on their own tasks.
To achieve that, you can use a centralised repository for all developers, such as Subversion (e.g. with TortoiseSVN).
Also, all developers can verify their work on their local machines first: if the program still has errors, they are not allowed to commit the changes to the repository until the errors are fixed, to avoid deployment issues. If everything is fine, the developers commit the code to the repository, the deployment team checks it, and once both the QA and dev teams are satisfied, the deployment team deploys.
Hope it helps.
Regards.

Suggestions for software to ease setting up a build server

I'm currently setting up a new build server and I'm interested in any suggestions the community may have about software such as Hudson or CruiseControl.NET that may simplify and add additional value to the build process.
Previously I had a build server set up using custom batch files which would run msbuild and other such tools, triggered by Subversion hooks to allow continuous builds to be done per branch. The idea was that eventually we would also execute automated tests and/or static analysis, although we never really got that far. This server also acted as our source code repository, a test machine for web project builds, and a web server for a custom dashboard and portal for developers on the team.
At this point my thoughts are to separate some of the responsibilities of the old build server: have at least a build server responsible only for creating builds, a web server acting as the intranet-style dashboard site for developers, and perhaps an additional server hosting the Subversion repository. If it turns out to be better or easier to keep the Subversion code on the same server as SvnServe then I'll probably opt to place the Subversion repository on the web server but still keep the build server separate. Having no personal experience with any of the popular build server and CI solutions out there, I'm curious how CruiseControl.NET, Hudson or other solutions would fit into this type of configuration. It appears that both CC.NET and Hudson have web interfaces, for example, but the documentation doesn't clearly lay out how this plays out with different hardware/system configurations, so I'm not sure whether either requires the web portion to be on the build server itself or not.
As far as technologies go, I'm dealing with .NET/C# based code which is a mix of Web/WinForms/WPF, and we use a few separate Subversion repositories to host these projects. Additionally it would be nice to support Visual FoxPro and Visual SourceSafe for some legacy applications. I would also like to get more team members involved in monitoring builds, and would like to eventually have developers create build setups for their own projects as well, with as much simplicity as possible. I should also mention that I have no experience setting up a Java based web application in IIS, but I do have quite a bit of experience setting up and managing ASP.NET applications, so that may make .NET based products more favorable unless I can be convinced otherwise.
UPDATE (after researching Hudson): After all the recommendations for Hudson I started looking into what is involved in getting it up and running on my two Windows 2008 servers. From what I can gather, the web portion (master) would run on my web server, but it seems that IIS isn't supported, so this would greatly complicate things since I want to host it on the same machine as my other web applications. On the build server, I would install a second copy of Hudson that would act as a slave and only perform builds that are delegated to it by the master. To get this to work I would be installing Hudson as a Windows service and would also need to install some Unix compatibility utilities. Unfortunately the UnxUtils download link appears to be broken when I checked, so I can't really move forward until I get that resolved. All of this is sounding just as complex, if not more complex, than installing CruiseControl.NET. For now this unfortunately leaves me looking into CruiseControl.NET and TeamCity.
UPDATE (about TeamCity): After looking into TeamCity a little closer I realized that at least the server portion is also written in Java and is deployed in a manner very similar to Hudson. Fortunately it appears that Tomcat can be used to host servlets inside IIS, although I can't find a good, straightforward guide describing how to actually accomplish this. So, skipping that for now, I looked further and ran into what looks like a major snag:
TeamCity Professional edition only supports TeamCity Default Authentication and does not support changing the authentication scheme.
Since Windows authentication is likely the direction we will want to go, it's now looking like it might be back to evaluating CruiseControl.NET, or possibly Hudson if I can get my hands on the UnxUtils and also find out more about how I can host the dashboard portion of Hudson within my existing IIS configuration. Any pointers?
UPDATE (about Jenkins): I ended up experimenting enough with Hudson that I ended up with a reasonable build server setup that I'm happy with and that can be extended to do much more if I need. Of course I went the route of converting to Jenkins once Oracle took over Hudson, and Jenkins is what I'm using today, with little bits of PowerShell to help tie things together. I'm very happy with this approach right now and, besides being Java based, Jenkins has quite a bit of support for other development environments such as .NET and MSBuild.
I'd vote for TeamCity here. It is very, very easy to get up and running, and it integrates with all your .NET stuff without any trouble. The builds themselves are run by agents which can be on the build server or another machine depending on requirements; they could even be on a machine running an entirely different OS on a different network in a different country.
I highly recommend using Hudson. Not only will it allow you to build .NET applications on a continual basis, but you can also run code analysis and unit tests as well. It's easy to install (just deploy a WAR file to a web server such as Tomcat) and has many configuration options. There is also a large number of plugins available that you can use, many written by other Hudson users. Best of all, it is free and actively supported.
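As a sketch of how little setup that takes (the paths and port are examples), you either drop the WAR into an existing Tomcat or run it standalone with the embedded servlet container:

    # Option 1: deploy into an existing Tomcat (the webapps path depends on your install).
    cp hudson.war /var/lib/tomcat/webapps/

    # Option 2: run it standalone with the embedded container.
    java -jar hudson.war --httpPort=8080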
For our decision-making process we started with the following overview.
http://confluence.public.thoughtworks.org/display/CC/CI+Feature+Matrix
Our main objectives were Java support and something easy to configure and use, even after nobody has created a job for six months. We moved away from an old version of CruiseControl, since nobody really knew how to use it. Some of the commercial products are nice if you want to go beyond just continuous integration. Have a look and decide for yourself.
Be careful, I don't know how up to date this matrix is, so some of the projects might have implemented more features by now.
An interesting alternative could be JIRA Studio by Atlassian. If you use the hosted version you don't have many support issues, and it comes with Subversion, Bamboo, and goodies (JIRA + GreenHopper, Confluence, Crucible, FishEye). http://www.atlassian.com/hosted/studio/
I agree with Wyatt Barnett. TeamCity is the best choice. It is very easy to configure and use. Moreover, TeamCity has a Free Professional Edition. Previously we used CruiseControl.NET on our project. This is also a powerful tool, but it is very complicated and hard to understand.
What s.ermakovich said: Both TeamCity and Hudson separate the web UI from build agents. You shouldn't need to install IIS on a build agent. You'd need to install a JVM and the agent software on any build node - very straightforward.

In which practical ways can virtualization enhance your development environment?

Practical uses of virtualization in software development are about as diverse as the techniques to achieve it.
Whether running your favorite editor in a virtual machine, or using a system of containers to host various services, which use cases have proven worth the effort and boosted your productivity, and which ones were a waste of time?
I'll edit my question to provide a summary of the answers given here.
It'd also be interesting to read about the virtualization paradigms employed, as they have become quite numerous over the years.
Edit : I'd be particularly interested in hearing about how people virtualize "services" required during development, over the more obvious system virtualization scenarios mentioned so far, hence the title edit.
Summary of answers :
Development Environment
Allows encapsulation of a particular technology stack, particularly useful for build systems
Testing
Easy switching of OS-specific contexts
Easy mocking of networked workstations in an n-tier application context
We deploy our application into virtual instances at our host (Amazon EC2). It's amazing how easy that makes it to manage our test, QA and production environments.
Version upgrade? Just fire up a few new virtual servers, install the software to be tested/QA'd/used in production, verify the deployment went well, and throw away the old instances.
Need more capacity? Fire up new virtual servers and deploy the software.
Peak usage over? Just dispose of no-longer-needed virtual servers.
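With the AWS command line, that whole lifecycle is a handful of calls; the AMI, instance type and instance IDs below are placeholders:

    # Fire up two fresh instances from the image that has the release baked in.
    aws ec2 run-instances --image-id ami-0123456789abcdef0 --count 2 --instance-type t3.small

    # Peak over? Throw the no-longer-needed instances away.
    aws ec2 terminate-instances --instance-ids i-0aaa1111bbb2222cc i-0ddd3333eee4444ff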
Virtualization is used mainly for various server uses where I work:
Web servers - If we create a new non-production environment, the servers for it tend to be virtual ones so there is a virtual dev server, virtual test server, etc.
Version control and QA applications - Quality Center and SVN are run on virtual servers. The SVN box also runs CC.Net for our CI here.
There may be other uses but those seem to be the big ones at the moment.
We're testing the way our application behaves on a new machine after every development iteration, by installing it onto multiple Windows virtual machines and testing the functionality. This way, we can avoid re-installing the operating system and we're able to test more often.
We needed to test the setup of a collaborative network application in which data produced on some of the nodes was shared amongst cooperating nodes on the network in a setup with ~30 machines, which was logistically (and otherwise) prohibitive to deploy and set up. The test runs could be long, up to 48 hours in some cases. It was also tedious to deploy changes based on the results of our tests because we'd have to go around to each workstation and make the appropriate changes, which was a manual and error-prone process involving several tired developers.
One approach we used with some success was to deploy stripped-down virtual machines containing the software to be tested to various people's PCs and run the software in a simulated data-production/sharing mode on those PCs as a background task in the virtual machine. They could continue working on their day-to-day tasks (which largely consisted of producing documentation, writing email, and/or surfing the web, as near as I could tell) while we could make more productive use of the spare CPU cycles without "harming" their PC configuration. Deployment (and re-deployment) of the software was simplified, since we could essentially just update one image and re-use it on all the PCs. This wasn't the entirety of our testing, but it did make that particular aspect a lot easier.
We put the development environments for older versions of the software in virtual machines. This is particularly useful for Delphi development, as not only do we use different units, but different versions of components. Using the VMs makes managing this much easier, and we can be sure that any updated exes or dlls we issue for older versions of our system are built against the right stuff. We don't waste time changing our compiler setups to point at the right shares, or de-installing and re-installing components. That's good for productivity.
It also means we don't have to keep an old dev machine set up and hanging around just-in-case. Dev machines can be re-purposed as test machines, and it's no longer a disaster if a critical old dev machine expires in a cloud of bits.