Honestly, I don't really know how to begin.
I have a live site on a VPS. My development flow is usually making changes on my local machine, then pushing to live via Capistrano. I use git, but I don't really know the setup (it was done by a friend), so I'm not sure whether my git repo is local or on the server.
Now I want to do something more manageable. I want to use Redmine to track my development. Having said that, I would like to host my repo on the same server as my live site, which would give easy access to the other remote developers. Is it a good idea to host this repo on the same server?
Also, in the future I will need a unit test and functional test server. I reckon this should be separate from the live server, right?
What is a good arrangement? Of course I am on a pretty tight budget, and I don't want to buy my own physical server.
Thanks.
I've hosted dev and live versions on the same VPS, e.g. dev.site.com. I usually put the dev site behind basic auth to give it at least a little privacy during development. But if your source repository is also on the same VPS, then you need a standard backup process that gives you an off-VPS copy. You definitely don't want all your eggs in one basket.
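For the off-VPS copy, something as small as a nightly cron job that bundles the repo and copies it elsewhere is enough. A minimal sketch in Python, with made-up paths and hostnames, assuming a bare repo lives on the VPS:

```python
#!/usr/bin/env python3
"""Nightly off-VPS backup of a git repo (hypothetical paths; adjust to your setup)."""
import subprocess
from datetime import date

REPO_DIR = "/srv/git/mysite.git"          # assumed location of the bare repo on the VPS
BUNDLE = f"/tmp/mysite-{date.today()}.bundle"
OFFSITE = "backup@backup-host:/backups/"  # any machine that is not this VPS

# A bundle is a single file containing the whole repository history.
subprocess.run(["git", "-C", REPO_DIR, "bundle", "create", BUNDLE, "--all"], check=True)
# Copy it somewhere off the VPS; restoring is just cloning from the bundle file.
subprocess.run(["scp", BUNDLE, OFFSITE], check=True)
```

Restoring on any other machine is then a matter of running git clone against the bundle file.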
For multi-developer use, you just need a repo that everyone can access. The dev instances are better split across the developers' machines, plus a regular build cycle for a dev version that includes everyone's changes. That would be your test/QA server. Unit tests can just run locally.
Does that help? Not sure if this answer is as technical as you want.
We want to start working with Liferay, but the server is too heavy and the developers' computers don't have enough RAM, so we want to centralize the server instance.
In other words, we want to build a development server where all developers can connect, develop directly in their web browser, compile, view the result, and push the code to a git repository.
I found some good cloud IDEs like Eclipse Che and a good Maven archetype for Liferay projects, so I can build the project with Maven. Now I want to know whether it is possible to configure Liferay so that every developer can work without disturbing the others, and if so, how.
The developers can share the same database and use different ports. Maybe the server could generate temporary URLs like some online cloud editors do.
I found this post, Liferay With Multiple Server Instances, but I don't think it is the best approach because it creates one server per project, which seems too heavy.
If it helps, we have Kubernetes in our infrastructure.
Liferay's Tomcat bundle is configured by default to take a maximum of 2.5 GB for the process, but it can run with far less. The default was only recently bumped up, because many people never change it and then wonder why production systems run out of memory. For one concurrent user (the sole developer) on a machine, I'd guess the previous default of 1 GB of heap space is enough. Are you saying that that's too much for your developers' machines?
Having many developers on a shared server poses one problem: yes, you may deploy different code from different machines, but how about setting a breakpoint? Can you connect with multiple debuggers? If something fails, how do you know whose recent deployment caused the failure?
Sharing a server is an integration technique, not a development technique. If your developers don't have enough memory available for running their own Liferay server next to their IDE, it's a lot cheaper to upgrade their machines than to slow them down when everybody is accessing the same server and nobody can properly debug. You pay for the memory once, but you pay your waiting developers by the hour.
Is it possible to share one server? Sure it is.
Is it possible to share one server without troubling each other? I doubt it.
When you say you think it's too heavy: what are you basing that assumption on? What do the actual developer machines look like, and what keeps you from investing in the extra memory?
It's trivial to share some infrastructure, e.g. have everyone connect to the same database server (and give each developer their own schema). But even that extra effort and setup might cost you as much in developer hours as you'd otherwise pay for a couple of memory chips.
And yet another option: run Liferay on a remote server, but keep one instance per developer. This way you don't need the local memory and can have the memory in the cloud instead. Calculate whether you pay more for remote cloud machines than for local memory; that decision is up to you.
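If you do go the remote route and your ops team is comfortable with containers (you mentioned Kubernetes), one instance per developer can be as simple as one container per developer on a different port. A rough sketch, assuming a containerized Liferay image such as the community liferay/portal image or one you build yourself; the names, ports, and memory cap are illustrative:

```python
#!/usr/bin/env python3
"""Sketch: one containerized Liferay instance per developer on a shared remote host."""
import subprocess

IMAGE = "liferay/portal"                        # assumption: a community or in-house Liferay image
DEVELOPERS = {"alice": 8081, "bob": 8082, "carol": 8083}   # hypothetical developer -> port mapping

for dev, port in DEVELOPERS.items():
    subprocess.run([
        "docker", "run", "-d",
        "--name", f"liferay-{dev}",
        "-p", f"{port}:8080",                   # each developer gets their own HTTP port
        "-m", "2g",                             # cap memory per instance
        IMAGE,
    ], check=True)
```

Each developer then points their browser (and their deployments) at their own port, so they don't step on each other.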
I am very, very new to Docker. Our team has had a very nice deployment lineup where we have different CI engines for different projects, including Jenkins and TeamCity.
Developers usually check in, CI takes over and deploys, and it's perfectly ready for the test team to test. I always thought this to be a perfect model. Of course, some parts of our implementation have their flaws, but it worked very well for what we wanted.
Now, our DevOps team is introducing Docker, where the test team gets a Docker image from the Docker registry every time we run a build from TeamCity. While it sounds really fancy, I am still failing to understand the benefit of it.
After my research, my conclusion was that Docker can be a good lightweight replacement for VMs, but only if you are already using VMs, and we are not using any. I just do not understand what the real value is here. Also, while searching I found a relatively good link on Docker:
https://www.ctl.io/developers/blog/post/what-is-docker-and-when-to-use-it/
They discuss when you should use Docker, and one of the points says:
Use Docker whenever your app needs to go through multiple phases of development (dev/test/qa/prod, try Drone or Shippable, both do Docker CI/CD)
OK, but they do not elaborate further on why Docker is useful when my app has to go through multiple phases.
And how is it so helpful compared to a regular dev/test setup when the existing setup is already working smoothly?
First, you are right to compare it to VMs in that it is similar to a VM. However, Docker is incredibly lightweight; this property is the one that surprised me most in the beginning. As opposed to virtual machines, containers share resources much more efficiently. Virtual machines are fully isolated, while containers can run simultaneously on a host machine with very little overhead, and you can configure containers to talk to each other (via volume or port bindings).
Furthermore, in my team, Docker brings the following benefits:
our application consists of one big application and a few microservices, but we want to release everything as one package with the inter-dependencies among the applications resolved, which eliminates the problem of figuring out which versions of the application and microservices should be deployed together (compatibility). The images contain everything you need, and you can bring all the applications, or individual ones, up and down using docker-compose. You do not need to deploy; you simply pull the images and start containers. If you wish to stop one of the microservices, you can do so without affecting the others.
developers on the team can run the very same image on their local machine, for example to troubleshoot a problem that occurred in production, which means troubleshooting happens in the same environment as production. This brings environment standardization and ends the "but it works on my machine" talk.
another benefit it brings us: we build a Docker image, run our tests against it, and push it to the registry only once all those phases succeed, which gives us great portability (a rough sketch of that flow is below).
Ability to version your images. You can easily compare the current version with previous versions, and if you wish to roll back, that is done smoothly.
Isolating and securing applications. All containers are isolated and you can easily control what goes in and out.
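To make the build-test-push flow mentioned above concrete, here is a rough sketch of what such a CI step can look like, expressed as a small Python script. The image name, registry, and test command are placeholders, not our actual setup:

```python
#!/usr/bin/env python3
"""Sketch of a build -> test -> push pipeline step around the docker CLI."""
import subprocess, sys

IMAGE = "registry.example.com/myapp:1.2.3"   # hypothetical registry and tag

def sh(*cmd):
    """Run a command, echoing it first, and fail loudly if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

try:
    sh("docker", "build", "-t", IMAGE, ".")           # build the image from the project's Dockerfile
    sh("docker", "run", "--rm", IMAGE, "run-tests")   # run the test suite inside the image (command is project-specific)
    sh("docker", "push", IMAGE)                       # publish only if build and tests both passed
except subprocess.CalledProcessError as err:
    sys.exit(f"pipeline failed: {err}")
```

The point is that the exact artifact that passed the tests is the one that ends up in the registry and, later, in production.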
It took me a year before I got used to the idea, but now it seems simple enough.
I think part of that comes from the fact that people keep calling Docker a "virtual machine", which is not accurate. That's really just a nickname for what's happening behind the scenes. In a lot of ways, Docker will NOT replace a complete virtualization solution such as VMware. It does, however, bring a new way of thinking about infrastructure, one that many people have a difficult time wrapping their heads around.
You can start asking yourself: What makes a Linux distribution unique?
Aside from the kernel, everything else is just a "standard way" of organizing binaries, libraries, runtime and configuration files. You need your binaries in /bin, your libs in /lib, your configuration in /etc. User installations get placed under /usr...
Most distributions keep the main structure from the Unix legacy and add their own quirks. Each one has its own way to manage and distribute packages. Each maintains its own versions of libraries, drivers, etc.
The key ingredient is the kernel. That's something they all have in common. Nowadays, recent builds of the Linux kernel are compatible with pretty much all major distributions available. So, aside from /boot, most of everything else is just a matter of having the right files in the right place with the right permissions.
Now, imagine you take that whole distribution bundle (except the kernel) and place it in another directory of your running OS. Taking advantage of the same kernel you are already running, you isolate a new process so that it "thinks" that / is now that directory. Bingo! This process now "thinks" it's running all by itself on another operating system.
Docker builds on top of Linux containers, which allow us to do exactly that, but in a friendlier and easier way. Don't think of it as a virtual machine. Think of it as process isolation. The running kernel shares the machine's resources with this process while keeping it isolated from the rest of the system. It's like jails on steroids.
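To make the "process that thinks / is somewhere else" idea concrete, here is a bare-bones sketch using plain chroot from Python. It assumes a hypothetical directory holding an extracted userland and needs root; real containers add namespaces and cgroups on top of this, so treat it as an illustration of the concept, not how Docker is implemented:

```python
import os

ROOT = "/srv/otherdistro"   # hypothetical directory containing another distro's files (bin/, lib/, etc/, ...)

pid = os.fork()
if pid == 0:
    # Child process: from its point of view, ROOT becomes "/" and it runs that distro's shell.
    os.chroot(ROOT)          # requires root privileges
    os.chdir("/")
    os.execv("/bin/sh", ["/bin/sh"])   # this /bin/sh is the one inside ROOT
else:
    os.waitpid(pid, 0)
```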
That was a broad simplification. But, given the concept, think about the implications of this idea.
You can have, on the same host, multiple processes with completely different environments that might otherwise conflict with each other. One may be a legacy binary that needs old libraries in place (legacy systems never die). Another may be the most recent build of a bleeding-edge technology. Sharing the same kernel makes this efficient, and it's valuable resource management.
The most value I found comes from managing the infrastructure. Once you install Docker on the hosts, configure a swarm, and define a way of deploying containers, you mostly forget about the hosts. Adding users, installing packages, customizing, editing configuration files... all of that becomes a development task on your desktop. There's an incentive to script more, to automate more, and to keep your hands away from the physical or virtual machines unless absolutely necessary.
Gone are the days when someone changed some obscure setting on the server to work around some weird application behavior, forgot to tell anyone about it, and took a vacation. Changes to the environment can be committed to version control, tracked, and improved by everyone on the team. If your datacenter goes through a disaster, recreating the whole environment is a matter of rebuilding images and redeploying containers. Your infrastructure becomes consistent and reproducible, while keeping the doors open to a wide variety of operating systems and customized configurations for each application.
Developers can take advantage of Docker by recreating dev/staging/production environments on their desktops. No need to pollute a dev machine with application servers and database installations, or even pay the toll of VirtualBox to emulate all that.
Testing can be automated with a higher level of isolation. The Selenium team already has official Docker images. Creating an entire test hub should be a walk in the park with those puppies.
Building custom software, such as compiling Nginx with third-party modules, can also be done inside containers built from specialized images. No need to keep an entire server dedicated to it, or to pollute your desktop with all the dependencies and build packages.
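As a rough illustration of that last point, a throwaway build container can do the compiling and leave your desktop clean. The image and build command below are placeholders for whatever toolchain your project actually needs:

```python
#!/usr/bin/env python3
"""Run a build inside a disposable container so the host stays clean."""
import os, subprocess

SRC = os.getcwd()   # directory containing the source you want to build

subprocess.run([
    "docker", "run", "--rm",      # remove the container when the build finishes
    "-v", f"{SRC}:/src",          # mount the source into the container
    "-w", "/src",                 # work from the mounted directory
    "gcc:latest",                 # an image that already ships the toolchain (placeholder choice)
    "make",                       # the actual build step; artifacts land back in SRC
], check=True)
```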
Overall, we've been having a great experience with Docker. We've migrated our staging environment to this new platform, and plan to migrate other parts of the infrastructure as well, eventually into production. So far, so good.
I hope you can convince enough people to take a closer look at it. I'll admit it took me some time to get used to the idea, but once you get it, it's actually worth it.
I'm planning to host a Rails application on Linode, but I'm still unsure about the requirements and the deployment process. I'm only getting the 512 plan since I'm expecting relatively small traffic for the site.
My question is, do I need a repository host such as GitHub to store my code? I'm also a bit concerned about how long it takes to set up the server and the deployment process. I've browsed through the Linode library but I'm not entirely clear on how to deploy Rails apps. I'm planning to use nginx as my server and Passenger for deploying. Does anyone know where I can learn to deploy Rails applications on a Linode machine? A step-by-step tutorial with detailed explanation would be great. Thanks!
I've deployed a couple of simple applications on Linode and found their documentation to be excellent. In particular they have step-by-step tutorials tailored to specific environments. For example, in my case (like you) I wanted to use nginx, and I was using Ubuntu 10.04, so I followed this guide:
http://library.linode.com/frameworks/ruby-on-rails-nginx/ubuntu-10.04-lucid
If it's your first time setting up on a VPS there will be some hurdles certainly, but I found the experience to be very rewarding.
Regarding hosting your code, you have a number of options, but keep in mind that this is really a separate issue from deploying your app. You deploy your app on Linode, but you don't have to host your code there, although you certainly can.
In general terms, if you're okay with making your code open, then GitHub is certainly a good choice. If you want to keep the code private but still have access online (rather than just on one computer), you can take advantage of your Linode machine and host your code there.
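If you go that route, the simplest private setup is a bare repository on your Linode reached over SSH. A quick sketch (the user, host, and paths are placeholders):

```python
#!/usr/bin/env python3
"""Sketch: create a bare repo on your Linode and point your local clone at it."""
import subprocess

HOST = "deploy@your-linode-host"      # hypothetical SSH user and host
REMOTE_PATH = "repos/myapp.git"       # relative to that user's home directory

# One-time step: create the bare repository on the server.
subprocess.run(["ssh", HOST, f"mkdir -p repos && git init --bare {REMOTE_PATH}"], check=True)

# Add it as a remote of your local working copy and push.
subprocess.run(["git", "remote", "add", "linode", f"{HOST}:{REMOTE_PATH}"], check=True)
subprocess.run(["git", "push", "linode", "master"], check=True)
```

Deploy tools like Capistrano can then point at that remote just as they would at a GitHub URL.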
If you will have a number of other people contributing to the codebase, you might consider setting up gitosis or gitolite, which make this easy to manage. Alternatively, if you will be the main contributor to the codebase, you can set up a simpler configuration over HTTP, explained here: http://dev.bazingaweb.fr/2011/02/23/how-to-set-up-git-over-http.html
Linode also has documentation on setting up a remote git repository: https://library.linode.com/linux-tools/version-control/git
If you're choosing between gitosis and gitolite, I'd go with gitolite since gitosis appears to have been abandoned and is no longer being actively maintained.
Other references on deploying on Linode:
http://infinite-sushi.com/2011/01/deploying-a-rails-app-to-a-linode-box/
http://blog.chris-spencer.co.uk/from-zero-to-git-deployment-on-linode
Ryan Bates has a great screencast on deploying Rails apps to... Linode! Today's your lucky day :) Grab some popcorn and enjoy: http://railscasts.com/episodes/335-deploying-to-a-vps
You don't need a GitHub account to deploy on Linode. The deploy process happens between your local machine and the Linode servers, usually by means of the Capistrano gem.
This tutorial from Smashing Magazine is pretty good. http://coding.smashingmagazine.com/2011/06/28/setup-a-ubuntu-vps-for-hosting-ruby-on-rails-applications-2/
A complete script for installing nginx, PostgreSQL, Postfix, Node.js, and rbenv, and for adding a deployer user.
Also refer to this link: https://medrails.wordpress.com/?blogsub=confirming#subscribe-blog
Thanks
I'm looking for a good way to push code quickly and securely to my company's Windows web servers for release deployments.
I have a *nix background and in the past have always used rsync in conjunction with ssh for such tasks because it is quick, secure, and scriptable.
Right now our deployment process is very manual and requires logging into each server over remote desktop and using TortoiseHg to pull code from our main repo into the server (obviously this requires the webserver to have credentials into the central Hg repo). Needless to say, this process is very human, and accordingly error prone, not to mention tedious and slow. We also have several servers that we use internally for dev staging, QA team, etc.
What I would like to know is
1) Is there a straightforward way to do this with rsync and ssh (via Cygwin or PowerShell)?
2) What is the most accepted way to script pushing code to Windows boxes?
Thanks,
Jamie
Check out Jon Tørresdal's blog series on No-Click Web Deployment part 1 and part 2.
I'm currently setting up a new build server and I'm interested in any suggestions the community may have about software such as Hudson or CruiseControl.NET that may simplify and add additional value to the build process.
Previously I had a build server set up using custom batch files which would run MSBuild and other such tools, triggered by Subversion hooks to allow continuous builds per branch. The idea was that eventually we would also execute automated tests and/or static analysis, although we never really got that far. This server also acted as our source code repository, a test machine for web project builds, and a web server for a custom dashboard and portal for developers on the team.
At this point my thoughts are to separate some of the responsibilities of the old build server: at minimum a build server responsible only for creating builds, a web server responsible for acting as the intranet-style dashboard site for developers, and perhaps an additional server for the Subversion repository. If it turns out to be better or easier to keep the Subversion repository on the same server as svnserve, then I'll probably opt to place the repository on the web server but still keep the build server separate. Having no personal experience with any of the popular build server and CI solutions out there, I'm curious how CruiseControl.NET, Hudson, or other solutions would fit into this type of configuration. It appears that both CC.NET and Hudson have web interfaces, for example, but the documentation doesn't clearly lay out how this plays out with different hardware/system configurations, so I'm not sure whether either requires the web portion to be on the build server itself or not.
As far as technologies go, I'm dealing with .NET/C# based code which is a mix of Web/WinForms/WPF, and we use a few separate Subversion repositories to host these projects. Additionally, it would be nice to support Visual FoxPro and Visual SourceSafe for some legacy applications. I would also like to get more team members involved in monitoring builds, and would eventually like developers to create build setups for their own projects with as much simplicity as possible. I should also mention that I have no experience setting up a Java-based web application in IIS, but I do have quite a bit of experience setting up and managing ASP.NET applications, so that may make .NET-based products more favorable unless I can be convinced otherwise.
UPDATE (after researching Hudson): After all the recommendations for Hudson I started looking into what is involved in getting it up and running on my two Windows 2008 servers. From what I can gather, the web portion (master) would run on my web server, but it seems that IIS isn't supported, so this would greatly complicate things since I want to host it on the same machine as my other web applications. On the build server, I would install a second copy of Hudson acting as a slave, performing only the builds delegated to it by the master. To get this to work I would install Hudson as a Windows service and would also need to install some Unix compatibility utilities. Unfortunately, the UnxUtils download link appeared to be broken when I checked, so I can't really move forward until I get that resolved. All of this is sounding just as complex, if not more complex, than installing CruiseControl.NET. For now this unfortunately leaves me looking into CruiseControl.NET and TeamCity.
UPDATE (about TeamCity): After looking into TeamCity a little closer, I realized that at least the server portion is also written in Java and is deployed in a manner very similar to Hudson. Fortunately it appears that Tomcat can be used to host servlets inside IIS, although I can't find a good, straightforward guide describing how to actually accomplish this. So, skipping that for now, I looked further and ran into what looks like a major snag:
TeamCity Professional edition only supports TeamCity Default Authentication and does not support changing the authentication scheme.
Since Windows authentication is likely the direction we will want to go, it's now looking like it might be back to evaluating CruiseControl.NET, or possibly Hudson if I can get my hands on the UnxUtils and also find out more about how I can host the dashboard portion of Hudson within my existing IIS configuration. Any pointers?
UPDATE (about Jenkins): I ended up experimenting enough with Hudson that I arrived at a reasonable build server setup that I'm happy with and that can be extended to do much more if I need. Of course, I went the route of converting to Jenkins once Oracle took over Hudson, and Jenkins is what I'm using today, with little bits of PowerShell to help tie things together. I'm very happy with this approach right now, and despite being Java-based, Jenkins has quite a bit of support for other development environments such as .NET and MSBuild.
I'd vote for TeamCity here. It is very, very easy to get stood up and running, and it integrates with all your .NET stuff without any trouble. The builds themselves are run by agents, which can be on the build server or on another machine depending on requirements; they could even be on a machine running an entirely different OS on a different network in a different country.
I highly recommend using Hudson. Not only will it allow you to build .NET applications on a continual basis, but you can also run code analysis and unit tests as well. It's easy to install (just deploy a WAR file to a web server such as Tomcat) and has many configuration options. There is also a large number of plugins available that you can use, many written by other Hudson users. Best of all, it is free and actively supported.
For our decision-making process we started with the following overview.
http://confluence.public.thoughtworks.org/display/CC/CI+Feature+Matrix
Our main objectives were Java support and something easy to configure and use even after nobody had created a job for six months. We moved away from an old version of CruiseControl, since nobody really knew how to use it. Some of the commercial products are nice if you want to go beyond just continuous integration. Have a look and decide for yourself.
Be careful: I don't know how up to date this matrix is, so some of the projects might have implemented more features by now.
An interesting alternative could be JIRA Studio by Atlassian. If you use the hosted version you don't have many support issues, and it comes with Subversion, Bamboo, and goodies (JIRA + GreenHopper, Confluence, Crucible, FishEye). http://www.atlassian.com/hosted/studio/
I agree with Wyatt Barnett. TeamCity is the best choice. It is very easy to configure and use. Moreover, TeamCity has a Free Professional Edition. Previously we used CruiseControl.NET on our project. This is also a powerful tool, but it is very complicated and hard to understand.
What s.ermakovich said: Both TeamCity and Hudson separate the web UI from build agents. You shouldn't need to install IIS on a build agent. You'd need to install a JVM and the agent software on any build node - very straightforward.