I'm looking for a good way to push code quickly and securely to my company's Windows web servers for release deployments.
I have a *nix background and in the past have always used rsync in conjunction with ssh for such tasks because it is quick, secure, and scriptable.
Right now our deployment process is very manual: it requires logging into each server over Remote Desktop and using TortoiseHg to pull code from our main repo onto the server (obviously this requires the web server to have credentials for the central Hg repo). Needless to say, this process is very human, and accordingly error prone, not to mention tedious and slow. We also have several servers that we use internally for dev staging, the QA team, etc.
What I would like to know is:
1) Is there a straightforward way to do this with rsync & ssh (via cygwin), or with PowerShell?
2) What is the most accepted way to script pushing code to Windows boxes?
Thanks,
Jamie
Check out Jon Tørresdal's blog series on No-Click Web Deployment part 1 and part 2.
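As a rough starting point before you dig into those: if each Windows server runs cygwin with sshd and rsync installed, the push can look almost identical to what you're used to on *nix (hostnames, key, and paths below are hypothetical):

    # Push a release build to each Windows box over ssh; cygwin exposes
    # the C: drive as /cygdrive/c, so IIS paths map straight through.
    for host in web1 web2 web3; do
        rsync -avz --delete \
            -e "ssh -i ~/.ssh/deploy_key" \
            ./build/ deploy@$host:/cygdrive/c/inetpub/wwwroot/myapp/
    done

The blog series covers more Windows-native tooling if you'd rather not put cygwin on the servers.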
We want to start working with Liferay, but the server is too heavy and the developers' computers don't have enough RAM, so we want to centralize the server instance.
In other words, we want to build a development server where all developers can connect and develop directly in their web browsers: compile, view the result, and push the code to a git repository.
I found some good cloud IDEs like Eclipse Che and a good Maven archetype for Liferay projects, so I can build the project with Maven. But now I want to know if it is possible to configure Liferay so that every developer can work without disturbing the others. And if it is possible, how?
The developers can share the same database and can use different ports. Maybe the server can generate temporary URLs, like some online cloud editors do.
I found this post, Liferay With Multiple Server Instances, but I don't think it is the best way because it creates one server per project. I think that is too heavy.
If necessary, we have Kubernetes in our infrastructure.
Liferay's tomcat bundle, by default, is configured to take a maximum of 2.5G for the process, but it can run with far less - the default was only recently bumped up, because many people never change it and then wonder why production systems run out of memory. For 1 concurrent user (the sole developer) on a machine, I'd guess that the previous default of 1G heap space is enough. Are you saying that that's too much for your developers' machines?
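For reference, the heap cap lives in the bundle's Tomcat setenv.sh (the exact file location and flags vary a bit between bundle versions, so treat this as a sketch):

    # tomcat-x.y.z/bin/setenv.sh in the Liferay bundle: cap the heap
    # at 1G for a single-developer machine
    CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx1024m"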
Having many developers on a shared server poses one problem: yes, you may deploy different code from different machines, but how about setting a breakpoint? Can you connect with multiple debuggers? If something fails, how do you know whose recent deployment caused the failure?
Sharing a server is an integration technique, not a development technique. If your developers don't have enough memory available for running their own Liferay server next to their IDE, it's a lot cheaper to upgrade their machines than to slow them down when everybody is accessing the same server and nobody can properly debug. You pay for the memory once, but you pay your waiting developers by the hour.
Is it possible to share one server? Sure it is.
Is it possible to share one server without troubling each other? I doubt it.
When you say you think it's too heavy: what are you basing that assumption on? What does the actual developer machine look like, and what keeps you from investing in the extra memory?
It's trivial to share some infrastructure - i.e. have all of them connect to the same database server (and give everyone their own schema). But the extra effort and setup alone might cost as much in developer hours as you'd otherwise pay for a couple of memory chips.
And yet another option: run Liferay on a remote server, but keep one instance per developer. This way you don't need the local memory, but can have the memory in the cloud. Calculate whether you pay more for remote cloud machines than for local memory - that decision is up to you.
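Since you mention having Kubernetes available: one cheap sketch of the per-developer-instance idea is a container per developer, each mapped to its own port (the image name and tag are assumptions; use whatever Liferay version you actually target):

    # one throwaway instance per developer, sharing nothing
    docker run -d --name liferay-alice -p 8080:8080 liferay/portal:7.2.1-ga2
    docker run -d --name liferay-bob   -p 8081:8080 liferay/portal:7.2.1-ga2

On Kubernetes the same idea becomes one Deployment and Service per developer, which is easy to script.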
Possible ways to accomplish it:
Creating dedicated WCF service for this purpose (currently my favorite option)
Using the REST API?
Azure PowerShell?
Explanation:
Publishing a web-role cloud service takes about 10 minutes. That's much too long during development - I try to do as much as I can offline, unit-test-ish and modular, but it's impossible to avoid development cycles against the VM altogether.
Apparently, the long publish time is mostly a result of the machine being wholly restarted, so I'm trying to find an automatic solution, like uploading and installing just the binaries.
What is the best way to accomplish it?
What do you think? Would it cut at least 50% off the publishing time?
Do you expect any critical problems?
The solutions proposed below are definitely against best practices and should NEVER-EVER be used in a production environment.
If your objective is to quickly test your changes in your development environment, there are two ways you can go about it.
Enable RDP: You could enable Remote Desktop on your web role and copy your modified binaries or other files manually into the appropriate folders on the VM.
Use Web Deploy: This will only work for web roles in your project, but you could enable Web Deploy on your web roles and use it to make deployments faster. Please see this link for more details on how to use this feature: https://msdn.microsoft.com/en-us/library/azure/ff683672.aspx.
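Under the hood, Web Deploy is msdeploy.exe syncing content to the role instance, so it's also scriptable; a hedged sketch of what the invocation looks like (service name, site name, and credentials are placeholders):

    REM Sync only the changed content to the role instance over Web Deploy.
    msdeploy.exe -verb:sync ^
        -source:contentPath="C:\build\MyWebApp" ^
        -dest:contentPath="MyWebApp",computerName="https://myservice.cloudapp.net:8172/msdeploy.axd",userName="azureuser",password="...",authType="Basic" ^
        -allowUntrusted

Note this updates the running instance only; the next full publish or instance reimage reverts to the packaged bits, which is exactly why it's a dev-only trick.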
What is the best way to manage code between VMs and a central SVN repository?
To be more specific, I have a desktop with a linux VM environment, as well as a laptop with a linux VM environment. Both are running under VMWare workstation. I switch back and forth between desktop and laptop all the time, but have trouble keeping the desktop and laptop in sync.
The most obvious--yet probably least efficient--choice is to just commit everything before I switch machines. However, this leads to committing code that is partially complete, just so I can work on a different machine.
I've considered using something like rsync to keep my two development environments in sync. I think this would be better because then I can still commit changes to svn when I want to, while keeping both desktop and laptop in sync.
So while I'm tempted to go the rsync route, I'm still concerned that I have to proactively sync things. In my case, I'm picturing a scenario where I'm working on something on my desktop, then leave to go to a coffee shop to do work with my laptop, only to realize that I didn't sync before leaving the house (DOH!).
I don't know if there's really any way around this. Maybe I could rsync everything to a centralized server that's always online? And set up cron jobs to run every few mins or whatever to sync with my various development environments?
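Something like this crontab entry on each machine is what I'm picturing (host and paths hypothetical):

    # every 5 minutes, push my working tree to an always-on box
    */5 * * * * rsync -az --delete ~/dev/ me@synchost:dev/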
Is there a better option?
You could consider using distributed version control instead. If you don't have the ability to change the central server, there are still wrappers like git-svn that allow you to use git on your end, while interacting with a Subversion server.
The workflow in a DVCS setup:
Make changes on machine #1, committing locally, repeat.
At switch time, commit locally.
Pull or push changesets from machine #1 to machine #2
Continue work on machine #2.
At switch time, commit locally.
Pull or push changesets from machine #2 to machine #1
Repeat
When it's time to actually push to the server, whichever computer you're on should have the latest code and you can push up to the master server (SVN or whatever).
This does make you commit intermediate changes - but I've found that to be more of a benefit of using a DVCS than a burden.
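In concrete commands (repository URL and remote name are placeholders), the flow looks roughly like this:

    # one-time: clone the SVN repo as a local git repo (-s = standard
    # trunk/branches/tags layout)
    git svn clone -s http://svn.example.com/myproject
    cd myproject

    # at switch time: commit locally, then pull onto the other machine
    git commit -am "WIP: half-finished feature"
    git pull laptop master      # 'laptop' added earlier via git remote add

    # when it's ready for the team: replay local commits onto SVN
    git svn rebase              # pick up new SVN revisions first
    git svn dcommit             # push your local commits to the server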
An alternative to this might be to keep your whole dev directory in a Dropbox folder or some equivalent. Then you don't have to deal with rsync or anything yourself, but you have less control over syncing.
syncd may be what you are looking for: https://github.com/drunomics/syncd. It uses inotify and rsync to listen for file system changes and rsync the changed files to a remote server.
It is a one-way sync, though, so you will have to stop it when you stop working on one machine and start it on the other. You will also need an ssh server running on both machines.
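If you'd rather not adopt another tool, the core of what syncd does is a small loop you could run yourself (hostname and paths are placeholders; requires inotify-tools):

    # on whichever machine you're currently editing: push every change
    while inotifywait -r -e modify,create,delete,move ~/dev; do
        rsync -az --delete ~/dev/ me@otherbox:dev/
    done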
We have an issue that I can't wrap my head around regarding possible solutions.
We have a site that runs off a DotNetNuke CMS, with a custom ASP.NET CMS powering the reviews-engine aspect of it too. This is hosted on a Windows server running SQL Server, and the site has its own user registry.
We are looking at a script for an add-on revenue offering, and the best-of-breed option we have found happens to be Linux-based, using MySQL. There are some other options, but none are nearly as robust as the Linux-based one.
Our quandary is two-fold:
1) If we use this script, we will need to host a Linux server with a different hosting service (ours only does Windows servers). Both server sets will point to the same domain (www.mydomain.com) and will need communication between the MySQL DB on the Linux machine and the SQL Server DB on the Windows machine.
Is this possible...and problematic? Or is this a fairly straightforward issue to solve?
2) The larger issue, if the first hurdle can be cleared, is that we would want to share our user registry between the two databases, so the user would not have to log into each system separately when going between the two environments.
This issue is more complex than my understanding of authentication and databases so I'm hoping someone can help me out or at least start me in a good direction for research.
We could go with the other script routes, but they simply don't offer the functions or features of the more difficult to implement code.
OK, you can always run MySQL on the Windows box and install cygwin to run the script in a more Unix-type environment. Or run XAMPP on a different port: http://www.apachefriends.org/en/xampp.html
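For the XAMPP route, moving Apache off port 80 so it doesn't fight IIS is a two-line change in apache/conf/httpd.conf (8080 is an arbitrary choice):

    Listen 8080
    ServerName localhost:8080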
I've got a Windows Server box running AD and a CentOS box running OpenLDAP in a mixed Windows/Linux network, and I want to keep the two in sync, preferably using free software and/or just some configuration changes. Anyone know how to make these two authentication systems play nice? Any syncing would have to be done over SSL for security reasons.
I use a home-grown Perl script, which syncs one way from AD to LDAP via SSL. It is very custom and very rigid. I walked the same path 6 months back looking for tools to sync, but none fit our needs. Well, actually, there isn't any that does the sync without breaking.
So my answer is: get a scripting guy and give him the requirements and a month's paycheck. Seriously, it is better done in-house than spending time looking for a tool and molding it to your needs.
Perl has good libraries and has worked very well for us. We migrated from OpenLDAP to 389-DS, which already has a windowsSync plugin. (Hope that tempts you to switch over.) :)
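For a feel of what such a script does, here is a hedged shell-level sketch of one one-way pass over SSL using the standard OpenLDAP client tools (hosts, DNs, and attributes are placeholders; the real work is the schema massaging in the middle):

    # pull users out of AD over LDAPS...
    ldapsearch -H ldaps://ad.example.com \
        -D "CN=syncuser,CN=Users,DC=example,DC=com" -W \
        -b "CN=Users,DC=example,DC=com" \
        "(objectClass=user)" cn mail sAMAccountName > ad-users.ldif

    # ...rewrite ad-users.ldif into your OpenLDAP schema here...

    # ...then load the result into OpenLDAP, also over LDAPS
    ldapadd -H ldaps://ldap.example.com \
        -D "cn=admin,dc=example,dc=com" -W -f ad-users.ldif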