How to upload new/changed files from development server to the production one? - apache

Recently I started to incorporate good practices in my development workflow, so I split the development server and the production one. I also incorporated a versioning system using Subversion (Tortoise SVN).
Now I have the problem of synchronizing the production server (Apache shared hosting) with the files of the latest development version on my local machine.
Before, I didn't have this problem because I worked directly on the server files through FileZilla. But now I don't know how to transfer the files efficiently, or what the good practices are in this area.
I read something about Ant and Phing, but I'm not sure whether they are appropriate for my case or just unnecessary complexity.

Rsync is a cross-platform tool designed to help in situations like this; I've used it for similar purposes on multiple occasions. This DevShed tutorial may be of some help.
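As a hedged sketch of how it might be used here (the host, user, and paths are placeholders, and this assumes the shared host allows rsync over SSH):

    # Preview what would be transferred/deleted before touching production:
    rsync -avzn --delete ./site/ user@example.com:/home/user/public_html/

    # If the dry run looks right, drop the -n to actually sync:
    rsync -avz --delete ./site/ user@example.com:/home/user/public_html/

The --delete flag removes files on the server that no longer exist locally, so the dry run (-n) is worth doing every time.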

I don't think you want to automate it so much as establish control over your deployment and integration process. I generally like SVN, but it has some bugs, and one problem I have with it is that it doesn't support baselining -- instead you need to make a physical branch of your repository if you want a stable version to promote to higher environments while continuing to advance the trunk.
Anyway, you should look at continuous integration and Jenkins. This is a rather broad topic to which no single specific answer can be given; there are many ins, outs, and what-have-yous. It depends on your application platform and components: do you have database changes, are you dealing with external web services or third-party APIs, and so on.

There may be more structured solutions out there, but with TortoiseSVN you can export only the files changed between two revisions, preserving the folder tree structure, and then upload them as usual with FileZilla.
Take a look at:
http://verysimple.com/2007/09/06/using-tortoisesvn-to-export-only-newmodified-files/
1. Using TortoiseSVN, right-click on your working folder and select “Show Log” from the TortoiseSVN menu.
2. Click the revision that was last published.
3. Ctrl+Click the HEAD revision (or whatever revision you want to release) so that both the old and the new revisions are highlighted.
4. Right-click on either of the highlighted revisions and select “Compare revisions.” This will open a dialog window that lists all new/modified files.
5. Select all files from this list (Ctrl+A), then right-click on the highlighted files and select “Export selection to…”
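For anyone preferring the command line, a rough equivalent sketch (run from a working copy; 1234 stands in for the last released revision, and a modified directory entry may pull in more than just the changed files):

    LAST_RELEASED=1234

    # List files changed since that revision, skip deletions, and export the rest
    # into ./export, preserving the directory layout, ready to upload via FTP.
    svn diff --summarize -r "$LAST_RELEASED":HEAD . | awk '$1 != "D" {print $2}' | while read -r f; do
      mkdir -p "export/$(dirname "$f")"
      svn export -q "$f" "export/$f"
    done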

Side note:
You need to give more details about your workflow and configuration - the applicable solutions depend on them. I see four main nodes in play: Workplace, Repo Server, DEV, PROD. Some nodes may be combined (1+2, 2+3), and they may have different sets of tools available (do you have SSH, rsync, NFS, or Subversion clients on DEV|PROD?). All the details matter.
In any case, Subversion repositories have a feature called hooks; in your case a post-commit hook (executed on the repository server side after each commit) can be used.
In this hook (any code that can be executed unattended) you can define and implement whatever rules you like for deploying to any target under any conditions. You only need to decide:
which transport will be used for transferring files;
what the webspaces on your servers are (working copies or just clean unversioned files - both solutions have their pros and cons), since this determines which deployment policy ("export" or "update") you have to implement in the hook.
There are also scripts around that export the files affected by a revision (or a range of revisions) into an unversioned tree.
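A minimal sketch of what such a post-commit hook could look like under the "export" policy, pushing over SSH with rsync (the repository layout, target host, and deploy path are placeholders):

    #!/bin/sh
    # hooks/post-commit - Subversion calls this as: post-commit REPOS REV
    REPOS="$1"
    REV="$2"

    EXPORT_DIR=$(mktemp -d)
    TARGET="deploy@prod.example.com:/var/www/site/"   # placeholder target

    # Export a clean, unversioned snapshot of the committed revision...
    svn export -q -r "$REV" "file://$REPOS/trunk" "$EXPORT_DIR/site"

    # ...and push it to the production webspace.
    rsync -az --delete "$EXPORT_DIR/site/" "$TARGET"

    rm -rf "$EXPORT_DIR"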

Related

Wordpress development process [closed]

I want to design a WordPress development process like the following:
First I want to create a Bitbucket repository for my WordPress site. From this repository, all our software developers should be able to clone the site to their local machines for development. For development, each developer should have a local database to test changes against.
After a developer finishes a task, he should be able to push his changes to the repo. When a sprint is done, I want to send all changes from the repo to the test environment with a Jenkins pipeline/job. In that environment a tester should be able to test all new functionality against a database cloned from the prod system (including the dev changes).
When all tests have passed, I want to be able to apply the database changes to the prod system (with an SQL script) and send all changes to the prod system with another Jenkins pipeline/job.
Do you think this can work? What about plugin updates? Can I set up environment variables for each system so that plugin updates only need to be done on the dev machine?
I'm not sure this can work, because a plugin or plugin update creates a lot of new database changes, and I think I need a tool that can display all the changes, like Sourcetree does for git.
Is there someone with expert knowledge of WordPress and this kind of development process who can share their experience with me?
Or do you think this process won't work with WordPress? If so, that would be really bad, because I need a process like this.
Thanks a lot!
I don't really know Wordpress, but the process you describe is definitely possible (I've implemented similar solutions on Drupal and Adobe Experience Manager, for instance).
However...
It's hard.
In a CMS project, a change/new feature can include:
a code change (PHP, CSS, JavaScript)
a database structure change (e.g. a new table)
a database content change (e.g. a copy fix, or default/test content)
a configuration change
Working out which version should get what is really hard. You want a developer to commit a change, and have that change replicated on QA with test content - but once QA sign it off, you probably don't want to promote that test content to production. And config changes should probably flow between systems but with different values for each environment.
For managing the database changes, I've found a plug-in that monitors database changes; no idea how scriptable that is.
See WP Activity Log.
What I've done in the past in similar situations is write a script that creates the database definition for each change - so a developer can run that script, and commit it as part of their code change. It requires a lot of discipline, though - you can only modify the database structure by using the scripts.
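As a hedged illustration of that discipline (not a prescription from the answer above), the per-change scripts could be numbered SQL files applied in order by a small runner; the database name, file layout, and tracking table here are made up:

    #!/bin/sh
    # apply_migrations.sh - run every db/NNN_*.sql that hasn't been applied yet.
    # Assumes a tracking table exists:
    #   CREATE TABLE schema_migrations (id VARCHAR(255) PRIMARY KEY);
    DB="wordpress_dev"   # placeholder database name

    for f in db/[0-9]*_*.sql; do
      id=$(basename "$f")
      applied=$(mysql -N -e "SELECT COUNT(*) FROM schema_migrations WHERE id='$id'" "$DB")
      if [ "$applied" -eq 0 ]; then
        echo "Applying $id"
        mysql "$DB" < "$f" && mysql -e "INSERT INTO schema_migrations (id) VALUES ('$id')" "$DB"
      fi
    done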
The correct answer is yes, you can do this. I know WordPress, Bitbucket, Git, SVN, Linux, and Ubuntu exceptionally well. I have built a system very similar to what you describe and use it daily.
The concern raised above is that the CMS can get tricky. That is true, but you need to use the correct tools for the correct upgrades. WordPress ALREADY has versioning and revisions built into it, and the DATABASE doesn't need to be involved at all.
First off, the database doesn't need to be updated unless you are updating plugins; for strict development no DB pushes are necessary. So have your developers check files in and out of Bitbucket. When the lead developer approves the changes, have him migrate/push them to the MASTER BRANCH in your repo. Inside of Bitbucket there is a tool called Git Hooks: you can trigger a PHP file on the server every time there is a push to the production branch. All that PHP file does is trigger the Linux command git pull, which updates all the code on the server with what is on your PRODUCTION BRANCH (git pull will also remove files if files were removed, etc.). On the server you will have a "checked out" copy of the Git repo, and on Linux the credentials will be stored after the first clone. Simply have your PHP file trigger a bash script that does a git pull. Done.
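A minimal sketch of the server side of that setup, assuming the webspace is a clone of the repo and the PHP hook endpoint simply shells out to a script like this (the path and branch name are placeholders):

    #!/bin/sh
    # deploy.sh - triggered by the PHP hook endpoint after a push to the production branch.
    SITE_DIR=/var/www/site      # placeholder: the "checked out" copy of the repo on the server

    cd "$SITE_DIR" || exit 1
    git pull origin master      # updates tracked files, including deletions, from the production branch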
No matter how many developers you have there will always need to be a set of eyes that reviews the code changes and merges those into production. I.e. that is where the Lead Developer comes into play.
FYI, the only directories in your WordPress instance that need to be in Bitbucket are the THEMES directory and the PLUGINS directory. You DO NOT need to sync the entire WP install, which is pretty large.
In the case that you are building custom plugins, again, it is just code that is stored in the plugins directory. If your custom plugins are built correctly and require the use of databases, they will create the WordPress tables they need as soon as they are activated. Likewise, a correctly built plugin will drop its own custom tables when uninstalled.
You will need to sync the two directories below.
Plugins folder: wp-content/plugins/
Themes folder: wp-content/themes/SELECTED_THEME
Any additional questions just ask and I am here.
From my experience it is always better to allow each developer to have their own branch, and to set up the dev server with a dedicated master branch for quality control. You can check out some documentation on how to set this up: https://plixxer.com/docs/server-management/website-quality-control/
Basically you want to have a live server and a dev server. The live server should only ever pull from the repo, while the dev server and the coders can pull from or push to the repo. My team treats the dev server as a quality-checking station: if the current dev code is not up to our standards, the entire dev server is rolled back to what is live on the master branch; when code in master meets our standards, we pull from the master branch onto the live server. Each developer should have their own branch for testing on their local server. Let me know if you need some help setting up a local environment with Git.
You will want to make a distinction between "build" and "release". The workflow as I understand it is that developers call their local workstations "dev" and pull-request their work into the develop branch (you may have already read through Gitflow). Then, using your choice of CI automation, you get the latest source into a build area and do just that - build it. Check out Ansible. If you have Bitbucket, maybe you also want to organize your sprint with the likes of Jira? Then you have pretty seamless integration of your sprint objectives with the actual branches containing the relevant work/source. Ansible can help you automate builds and releases to the point where you are doing daily builds and running the unit tests across your builds in the various integration environments.
During builds, you would have different configuration files factored in depending on the target environment. This is how environment configuration is handled: it is part of the build process, and ideally all configuration is possible through the build. For example, a connection string might differ across environments if you have different databases to isolate the migration of schema changes. In an Angular application, for instance, you would execute ng b --prod to build for production, which brings in a production configuration file during the build to change the connection string.
More about environment-specific configuration: you can also include post-deployment scripts that are deployed and executed after the files are uploaded, so that they configure the environment as required.
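A rough sketch of that idea in a build script, assuming one settings file per environment (the file names and build command are made up for illustration):

    #!/bin/sh
    # build.sh <environment> - pick the right config before building.
    ENV="${1:-dev}"                              # dev, test, or prod

    # Copy the environment-specific settings over the file the build consumes.
    cp "config/settings.$ENV.php" "config/settings.php" || exit 1

    # Then run whatever produces the deployable artifact (placeholder command).
    ./run-build.sh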
Ask your questions below, and I will do my best to build this out into a comprehensive guide.

Archivable, replicable releases when building with Maven: is there a right way?

We have a largish standalone (i.e. not Java EE) commercial Java project (10,000+ classes, four or five SVN repositories, ten or twenty third-party libraries) that's in the process of switching over to Maven. Unfortunately only one engineer (in a team of a dozen or so distributed across three countries) has any prior Maven experience, so we're kind of figuring it out as we go.
In the old Ant way of doing things, we'd:
check out source code from three or four repositories
compile it all into a single monolithic JAR
release that (as part of a ZIP file with library JARs, an installer, various config files, etc.)
check the JAR into SVN so we had a record of what the customers had actually got.
Now, we've got a Maven repository full of artifacts, and a build process that depends on Maven having access to that repository. So if we need to replicate what we actually shipped to a customer, we need to do a build against a Maven repository that has all the proper versions of everything. This is doable, I guess, if in (some version of) the (SVN-controlled) POM files we set all the dependencies to released versions?
But it gives our release engineer the creepy-crawlies, because there doesn't seem to be any way:
to make sure that somebody doesn't clobber the copy of foo-api-1.2.3.jar on the WebDAV server by mistake (the WebDAV server has access control, but that wouldn't stop a buggy build script)
to detect it if they did
to recover afterwards
His idea is, for release builds, to use a local file system as the repository rather than the WebDAV server, and put that local repository under SVN control.
Our one Maven-experienced engineer doesn't like that -- I guess because he doesn't like putting binaries under version control? -- and suggests that maybe the professional version of the Nexus server can solve the clobbering or clobber-tracking/recovery problem.
Personally, I'm not happy (sorry, Sonatype readers) with shelling out money for a non-free build system when we haven't even seen any benefit from the free version yet, and there's no guarantee it will actually solve the problem.
So our choices seem to be:
WebDAV server
Pros: only one server, also accessible by devs, ...?
Cons: easy clobbering, no clobber-tracking/recovery
Local file system
Pros: can be placed under revision control
Cons: only works with the distribution script
Frankly, both of these seem like hacks to me, and I have to wonder if there isn't a better way to do this.
So: Is there a right thing to do here?
I'm not sure I get everything, but I would:
Use the maven-release-plugin (which automates the release process, i.e. executes all the steps documented in release:prepare).
Use WebDAV with an anonymous read-only and authenticated write policy (so only the release engineer can actually deploy released artifacts to the corporate repo).
There is no need to put generated artifacts under version control (as long as you have the POMs under version control). I don't see the benefit of using the local file system instead of WebDAV (it doesn't provide more security; you can secure WebDAV as well), and I don't see what the commercial version of Nexus would solve here.
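For reference, a minimal sketch of how a release would be cut with that plugin and later reproduced from the tag it creates (repository URL and tag name are placeholders):

    # Cut the release: bumps versions, tags the source in SVN, builds and deploys the released artifacts.
    mvn release:prepare
    mvn release:perform

    # To reproduce exactly what was shipped, check out the tag and rebuild against released dependencies:
    svn checkout http://svn.example.com/repo/tags/myproject-1.2.3 myproject-1.2.3
    cd myproject-1.2.3
    mvn clean package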
Nexus has a setting which prevents you from clobbering an already released artefact in a release repository.
For a team of about a dozen, the free version of Nexus should be enough.

How to migrate WebSphere app with no WAR/EAR file

I am to migrate a Websphere machine (including the applications which run on it) to a new machine. They wanted a clean install of the OS and WebSphere, so I did that. I also took a full file backup of all of the applications they had on the old server. The problem is that to re-install them on the new server, the WebSphere dialog asks me for the JAR/EAR/WAR file, which I don't have.
Is there any reasonably easy way to simply extract the backup of the WebSphere application files I have taken from the old machine, and simply configure the new machine to use them? WAR, etc. is a nice feature to have, but to be forced to use it seems silly.
Edit: The existing WebSphere server is still up and running in production.
Edit: The old server is WAS 3.5, which means it doesn't even have an export function, sadly. Also, the directory where it actually runs the content from has a completely different structure (consisting of something like %/Web and %/Servlet, where % is the context path of the application). In the "Install" section, it doesn't even mention EAR or WAR, only JAR. I am currently thinking that perhaps the best thing to do might be to just copy the directory over to another WAS 3.5 system and then upgrade that system (and hope it converts the folder structure and updates the config as part of the upgrade).
Edit: The closest thing I have found to a solution so far is this link:
http://www.javazoom.net/services/newsletter/was4.html (though I am not sure if that tool is available or relevant for WAS 7.x).
This has to be a problem other people have run into before, but I can't find a solution anywhere on the WEB.
Thank you!
Here they have sample Jacl scripts one can use to export/import an appserver's configuration, so that is what you can start with. If your new box uses the same version of WAS (and the same topology, if it is not a standalone box) as the old one, it might be a (relatively) safe process.
Migration between different versions of WebSphere might be somewhat trickier, but I'm sure IBM has published at least one redbook on that topic.
If you still have the old server running, then just export the apps and you have the WAR/EAR files. However, if you don't know the configuration for the apps, you are screwed - though I am sure IBM has tools that you can use, and some of the paid tools even look nice and user-friendly (at least according to their sales demos). I can't tell you what you need, since I don't know what documentation you have for your apps. But it looks like there is not much there; otherwise you would just install the applications the same way they were installed on your old server and use the binaries (WAR, EAR, JAR) that are archived somewhere.

How do you distribute the IDE and its configuration within your team?

I'm wondering how software development teams distribute their standard IDE(s)?
E.g. developing with Eclipse: custom code formatter, SVN repository, copyright header...
At the moment my team has a standard zip file which is then distributed among the developers.
Problem:
If one file, a plugin, or the IDE itself changes (e.g. new coding guidelines, an upgrade to Eclipse 3.5.1), the whole distribution has to be done again and every developer needs to unzip the bundle again. Imagine you're working with different workspaces (Jetty, different Tomcat versions, WTP) due to project history. That doesn't scale.
I know that there are some related articles:
A new version of Eclipse just came out. Is there anything I can do to avoid having to manually hunt down my plugins again?
Manage Your Eclipse Install With A Local Git Repository
And some commercial programs.
Eclipse also has a new update/installer approach.
But I don't see the killer app. How does your team solve this? Is there a best practice?
I guess the best thing would be a program that lets you choose your current project, downloads the configured IDE from the server, and lets you know if project config files are updated.
For Eclipse, look at Buckminster; it targets exactly your use case, I suppose. I didn't use it personally, though.
At my previous company they wrote a custom update agent that pulled from a centrally configured server which was updated by the team leaders. It worked well, until people wanted to install their own plugins.
Basically, a developer wanted a plugin, fought in futility to get it included in the default (managed) repo, installed it himself, then updates broke on his machine when the team lead had a sudden stroke of common sense and included it.
They never did come up with a 'good' way to manage it. But, at least they didn't put us all on terminal servers with thin clients.

How to automate development environment setup? [closed]

Every time a new developer joins the team or the computer a developer is using changes, the developer needs to do lots of work to set up the local development environment to make the current project work. As a SCRUM team we are trying to automate everything, including deployment and tests, so what I am asking is: is there a tool or a practice to automate local development environment setup?
For example, to set up my environment, I first had to install Eclipse, then SVN, Apache, Tomcat, MySQL, and PHP. After that I populated the DB and had to make minor changes in various configuration files, etc. Is there a way to reduce this labor to one click?
There are several options, and sometimes a combination of these is useful:
automated installation
disk imaging
virtualization
source code control
Details on the various options:
Automated Installation: tools for automating installation and configuration of a workstation's various services, tools and config files:
Puppet has a learning curve but is powerful. You define classes of machines (development box, web server, etc.) and it then does what is necessary to install, configure, and keep the box in the proper state. You asked for one-click, but Puppet by default is zero-click, as it checks your machine periodically to make sure it is still configured as desired. It will detect when a file or mode has been changed, and fix the problem. I currently use this to maintain a handful of RedHat Linux boxes, though it's capable of handling thousands. (Does not support Windows as of 2009-05-08).
Cfengine is another one. I've seen this used successfully at a shop with 70 engineers using RedHat Linux. Its limitations were part of the reason for Puppet.
SmartFrog is another tool for configuring hosts. It does support Windows.
Shell scripts. RightScale has examples of how to configure an Amazon EC2 image using shell scripts.
Install packages. On a Unix box it's possible to do this entirely with packages, and on Windows msi may be an option. For example, RubyWorks provides you with a full Ruby on Rails stack, all by installing one package that in turn installs other packages via dependencies.
Disk Images: Then of course there are also disk imaging tools for storing an image of a configured host such that it can be restored to another host. As with virtualization, this is especially nice for test boxes, since it's easy to restore things to a clean slate. Keeping things continuously up-to-date is still an issue--is it worth making new images just to propagate a configuration file change?
Virtualization is another option, for example making copies of a Xen, VirtualPC, or VMWare image to create new hosts. This is especially useful with test boxes, as no matter what mess a test creates, you can easily restore to a clean, known state. As with disk imaging tools, keeping hosts up-to-date requires more manual steps and vigilance than if an automated install/config tool is used.
Source Code Control: Once you've got the necessary tools installed/configured, then doing builds should be a matter of checking out what's needed from a source code repository and building it.
Currently I use a combination of the above to automate the process as follows:
Start with a barebones OS install on a VMWare guest
Run a shell script to install Puppet and retrieve its configs from source code control
Puppet to install tools/components/configs
Check out files from source code control to build and deploy our web application
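A hedged sketch of what step 2 might look like on a RedHat-family guest (package names, repository URL, and manifest path are placeholders):

    #!/bin/sh
    # bootstrap.sh - run once on a fresh VM to hand control over to Puppet.

    # Install Puppet and a Subversion client (package names vary by distro).
    yum install -y puppet subversion

    # Pull the Puppet configs from source code control (placeholder URL).
    svn checkout http://svn.example.com/ops/puppet /etc/puppet/manifests

    # Let Puppet install tools, components, and configs as defined in the manifests.
    puppet apply /etc/puppet/manifests/site.pp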
I stumbled across this question and was very surprised that no one has mentioned Vagrant yet.
As Pete TerMaat and others have mentioned, virtualization is a great way to manage and automate development environments. Vagrant basically takes the pain away from setting up these virtual boxes.
Within minutes you can have a completely fresh copy of your favourite Linux distro up and running, and provisioned exactly the same way your production server is.
No more fighting with OSX or Windows to get PHP, MySQL, etc. installed. All software lives and runs inside the virtual machine. You can even SSH in with vagrant ssh. If you make a mistake or break something, just vagrant destroy it, and vagrant up to start over fresh.
Vagrant automatically creates a synced folder to your local file system, meaning you don't need to develop within the virtual machine (ie. using Vim). Use whatever your editor of choice is.
I now create a new "Vagrant box" for almost every project I do. All my settings are saved into the project repository, so it's easy to bring on another team member. They simply have to pull the repo, and run vagrant up, and they are literally ready to go.
This also makes it much easier to handle projects that have different software requirements. Maybe you have some projects that rely on PHP 5.3, but some newer ones that run PHP 5.4. Just install the version you want for that project.
Check it out!
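A minimal sketch of the day-to-day workflow described above, assuming a Vagrantfile is already committed to the project repository (the base box name is just an example):

    # One-time setup in a new project (creates the Vagrantfile you then commit):
    vagrant init ubuntu/trusty64

    # Daily use once the Vagrantfile is in the repo:
    vagrant up          # boot and provision the VM
    vagrant ssh         # log into it; the project folder is synced at /vagrant by default
    vagrant destroy -f  # throw the VM away and start over fresh if something breaks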
One important point is to set up your projects in source control such that you can immediately build, deploy and run after checkout.
That means you should also checkin helper infrastructure, such as Makefiles, ant buildfiles etc., and settings for the tools, such as IDE project files.
That should take care of the setup hassle for individual projects.
For the basic machine setup, you could use a standard image. Another option is to use your platform's tools to automate installation. Under Linux, you could create a meta-package that depends on all the packages you need. Under Windows, a similar thing should be possible using MSI or the like.
Edit:
Ideally, instead of checking in helper infrastructure, you check in the information that allows the build to generate the helper infrastructure. This is the approach taken by e.g. the GNU build system (autotools etc.), or by Maven. This is even more elegant, because you can (theoretically) generate infrastructure for any (supported) build environment, thus you are not bound to e.g. one specific IDE, and settings in the helper infrastructure (paths etc.) don't need to duplicate the main project settings.
However, this is also a more complex approach, so if you can't get it to work, I believe checking in stuff like IDE files directly is acceptable.
I like to use Virtual PC or VMware to virtualize the development environment. This provides a standard "dev environment" that can be shared among developers. You don't have to worry about software that a user might add to their system that could conflict with your development environment. It also gives me a way to work on two projects whose development environments can't both be on one system (because they use two different versions of a core technology).
Use puppet to configure both your development and production environment. Using a top-notch automation system is the only way to scale your ops.
There's always the option of using virtual machines (see e.g. VMWare Player). Create one environment and copy it over for each new employee with minimal configuration needed.
At a prior place we had everything (and I mean EVERYTHING) in SCM (ClearCase, then SVN). When a new developer came in, they installed ClearCase|SVN and pulled down the repository. This also handles the case when you need to update a particular lib/tool, as you can just have the dev teams update their environment.
We used two repos for this, so code and tools/config lived in separate places.
I highly recommend Blueprint from DevStructure. It's open-source and your use case is actually the exact reason we originally wrote the software. Our goals have somewhat changed, but it still is the perfect tool for what you are describing. In short, you can create reusable server configs - dead simple configuration management. I hope this helps!
https://github.com/devstructure/blueprint (Blueprint # Github)
I've been thinking about this myself. There are some other technologies that you could throw into the mix. Here's what I'm currently setting up:
PXE based pre-seeded installation images (Debian Squeeze). You can start up a bare-metal machine (or new virtual appliance) and select the image from the PXE boot menu. This has the major advantage of being able to install your environment on physical machines (in addition to virtual appliances).
Someone already mentioned Puppet. I use CFEngine but it's a similar deal. Essentially your configuration is documented and centralized in policy files which are continually enforced by an agent on the client.
If you don't want a rigid environment (i.e. developers may choose a combination of tool-sets), you can roll your own deb packages so new devs can type sudo apt-get install acmecorp-eclipse-env or sudo apt-get install acmecorp-intellij-env, for example.
Slightly off-topic, but if you run a Debian based environment (i.e. Ubuntu), consider installing apt-cacher (package proxy). In addition to saving bandwidth, it will make your installations much faster (since packages are cached on your local network).
If you're using OS X and working with Rails, I'd suggest either:
https://github.com/platform45/let-there-be-light
https://github.com/thoughtbot/laptop
If you use machines in a standard configuration, you can image the disk with a fresh perfectly configured install -- that's a very popular approach in many corporations (and not just for developers, either). If you need separately configured OS's, you can tar-bz2 all the added and changed files once a configured OS is turned into your desired setup, and just untar it as root to make your desired environment from scratch.
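A rough sketch of that tarball approach, with made-up paths (the list of added/changed files is assumed to have been produced by comparing against a stock install):

    # On the reference machine: archive the files that differ from a stock install.
    # changed.list is a newline-separated list of such paths.
    tar -cjpf devenv.tar.bz2 -T changed.list

    # On a new machine running the same base OS image:
    sudo tar -xjpf devenv.tar.bz2 -C /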
If you're using a Linux flavor, you've probably got a package management system: think .rpm for Fedora/Red Hat, or .deb for Ubuntu/Debian. Many of the things you describe already have packages available: SVN, Eclipse, etc. You could roll your own packages for company-specific software, create a repository (perhaps only available on the local network), and then your setup could be reduced to a single bash script which adds the company repo to /etc/apt/sources.list (Debian/Ubuntu) and then calls a command like,
/home/newhire$ apt-get update && apt-get install some complete package list
You could then use Buildbot to automate regular builds of company packages that change often.
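A hedged sketch of what that single setup script could look like (the repo URL and meta-package name are placeholders; the meta-package is assumed to pull in the rest of the toolchain via dependencies):

    #!/bin/sh
    # newhire-setup.sh - one-shot workstation setup on Debian/Ubuntu.

    # Add the (placeholder) company package repository.
    echo "deb http://packages.example.internal/apt stable main" | \
      sudo tee /etc/apt/sources.list.d/acmecorp.list

    # Install one meta-package whose dependencies pull in the whole toolchain.
    sudo apt-get update
    sudo apt-get install -y acmecorp-dev-env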
Try out DevScript at http://nsnihalsahu.github.io/devscript .
It's one command, like:
devscript lamp, devscript laravel, or devscript django. It takes around a few minutes, depending on the speed of your internet connection.