As in the title.
I ask because I cannot find this information anywhere. Currently I am using a Linux virtual machine on my vCenter which is cloned, and then a special shell script is run on the freshly cloned machine to set up the environment, IP addresses, etc.
Maybe I could benefit from templates this way.
I think this will be helpful:
https://www.robertparten.com/virtualization/vmware-difference-between-clone-and-template/
A few differences, in my opinion:
A virtual machine is a running instance, while a template is a compact copy of a VM (with baseline/factory settings) that can be stored anywhere.
You need to deploy a template to get a running VM.
You can create a copy from both a VM and a template, but a VM is cloned while a template is deployed.
Moving between different setups is easier with a template.
The rest are already covered in the link provided.
But first search on your own, and only ask if you still have doubts; that's how we all learn.
Happy Learning!
Looking at these two scenarios:
Create a template from your active VM, then deploy from the template.
Deploy from the active VM directly.
As far as I know, there will be no difference in the end result if you run these scenarios in the near future. You'll still have to run a script to get your IPs set up, etc.
So what's the difference?
If you mess stuff up with your active VM, change things around or whatever, you lose the ability to deploy from the (good) setup you had.
Once you make a template from your active VM, that configuration is saved as files on the ESX host (or on the storage, not 100% sure) and can be re-deployed in the future.
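If you want to script this, a rough PowerCLI sketch might look like the following (all the names here are placeholders):
# Convert the current, known-good VM into a template
# (depending on your vSphere version the VM may need to be powered off first)
$vm = Get-VM -Name "my-golden-vm"
New-Template -VM $vm -Name "my-template" -Datastore "datastore1"

# Later, deploy a fresh VM from the saved template; an OS customization
# spec can take care of IP addresses and hostnames during deployment
New-VM -Name "web-03" -VMHost "esx01.local" `
    -Template (Get-Template -Name "my-template") `
    -OSCustomizationSpec (Get-OSCustomizationSpec -Name "linux-spec")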
I have created a few servers on Google Cloud as VM instances. They run the same script every day, but each server runs it with different arguments.
However, when changes or updates need to be made, I have to apply them one by one; all the changes are the same, only the arguments differ. That means I SSH into a server, run apt updates, download some files, upload some files, change some arguments and test, then repeat the process on every server.
I would like to keep one copy of the server somewhere that propagates to the rest, or to make changes that apply automatically to each server.
Is there some way I can achieve this, i.e. update all the servers (apt update, download new files, or change scripts) at once?
I would suggest creating a managed instance group (MIG) that uses an instance template to create the VMs. Then you can roll out updates to the MIG.
You can provide a startup script stored on Cloud Storage and apply it to the running instances.
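As a rough sketch of the commands involved (names like my-template, my-group, the zone and the bucket path are placeholders):
# Create an instance template; the startup script is pulled from Cloud Storage
gcloud compute instance-templates create my-template \
    --machine-type=e2-small \
    --metadata=startup-script-url=gs://my-bucket/startup.sh

# Create a managed instance group of 3 VMs from that template
gcloud compute instance-groups managed create my-group \
    --zone=us-central1-a --size=3 --template=my-template

# Later, roll the whole group over to an updated template in one command
gcloud compute instance-groups managed rolling-action start-update my-group \
    --zone=us-central1-a --version=template=my-template-v2
Per-server arguments can still differ, for example by deriving them from the instance name or from per-instance metadata inside the startup script.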
First of all, sorry if this thread is not appropriate for Stack Overflow, but I think it is the best place to ask.
We are using Rancher to manage a microservices solution. Most of the containers are NodeJS + Express apps, but there are others like Mongo or Identity Server.
We use many environment variables, like endpoints or environment constants, and when we upgrade some of the containers individually we forget to include them (most of the time, the person who deploys an upgrade is not the person who made the new version).
So we're looking for a way to manage them. We know that using a Dockerfile could be the best way, but if we need to upgrade just one container, that seems like too much work for a minor change.
TL;DR: How do you manage your environment variables in Rancher? How do you document them, or how do you extract them automatically?
Thanks!
Applications in Rancher are generally managed using Stacks/Services. A Dockerfile is used to build a container image; docker-compose/rancher-compose files are used to define the applications. Environment variables can be specified in the docker-compose file.
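For example, a minimal docker-compose.yml sketch (the service name, image and variables below are placeholders for your own):
version: '2'
services:
  api:
    image: myregistry/my-node-app:1.2.3
    # All variables the service needs live here, in version control
    environment:
      NODE_ENV: production
      AUTH_ENDPOINT: https://identity.example.local
      MONGO_URL: mongodb://mongo:27017/app
Keeping this file in version control next to the service gives you one documented place where all required variables live, whoever performs the upgrade.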
When you upgrade a service in Rancher, the environment variable information is carried forward, and it is also possible to edit the variables before the upgrade.
Also, Rancher's "Catalog" feature might be useful for you. Check out: https://rancher.com/docs/rancher/v1.6/en/catalog/
We have an ASP.NET website, and we want to deploy (and remove) multiple instances of the site on the same IIS machine.
We also have a small number of customers who need to install the product on their own systems.
I was hoping WiX would be able to handle this, but it appears you can only have one instance installed at a time.
What options are available to me? Right now we use FinalBuilder to set up a generic "install package" that uses a batch file the user populates with their environment settings, plus tools like sed and awk to update config files and more scripts to deploy to IIS.
It works, but it's very cumbersome. I was hoping to find more of a GUI/command-line interface to replace this process.
It sounds like MSDeploy will work for your use case. It can deploy multiple instances to the same IIS instance and can also delete instances.
The following post is specifically about service versioning but you could use the same technique to install several instances of a web app.
http://www.dotnetcatch.com/2016/03/03/simple-service-versioning-with-webdeploy/
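As a rough command-line sketch (paths and names are placeholders, and it assumes the package exposes the standard "IIS Web Application Name" parameter):
rem Deploy one package as two separately named IIS applications
msdeploy -verb:sync -source:package="C:\builds\MySite.zip" -dest:auto ^
    -setParam:name="IIS Web Application Name",value="Default Web Site/InstanceA"
msdeploy -verb:sync -source:package="C:\builds\MySite.zip" -dest:auto ^
    -setParam:name="IIS Web Application Name",value="Default Web Site/InstanceB"

rem Remove an instance again: the app definition, then its content
msdeploy -verb:delete -dest:apphostconfig="Default Web Site/InstanceA"
msdeploy -verb:delete -dest:contentPath="C:\inetpub\wwwroot\InstanceA"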
Currently I am using MSDeploy to build and deploy to several machines using TeamCity. In my scenario I need to build, package and deploy to Dev, and after that deploy the same package to the Test and Live servers (which are on a different domain). I understand how we do it, but the problem is that the web.config transformations for Test and Live only run when a package is built for those environments. That means the package created for Dev cannot be reused, since only the Dev web.config transformation was applied. I also know that we can change the web.config when unpackaging, but those parameters are very limited, and we have a lot of changes, not just the connection string or DB settings.
Another solution is to add a step that builds packages for Test and Live as part of the Dev deployment, but that means a lot of copying to remote servers, once for Test and once for Live, which is very time-consuming because of the different domains.
Can you please suggest the best solution for this scenario, so that I can use TeamCity to publish to Dev, Test and Live with the same package and different web.configs in one go?
To configure items at deployment time that are not parameterized for you automatically, you can add a file named parameters.xml to your project and extend what is made available at deployment time.
Here's some documentation on the approach: Using Deployment Parameters for Web.Config File Settings.
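A minimal parameters.xml sketch (the parameter name and the XPath match are placeholders for your own settings):
<parameters>
  <parameter name="ApiEndpoint"
             description="Endpoint the application should call"
             defaultValue="https://dev.example.local/api">
    <parameterEntry kind="XmlFile"
                    scope="\\web\.config$"
                    match="/configuration/appSettings/add[@key='ApiEndpoint']/@value" />
  </parameter>
</parameters>
At deploy time each environment then supplies its own values via -setParam:name="ApiEndpoint",value=... (or a SetParameters.xml file), so the very same package can be pushed to Dev, Test and Live.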
I'm thinking of using Vagrant to develop Django applications, but I'm a little confused and I'm not sure if what I would like to do is even possible.
I installed the lucid32 box successfully and created a new "instance" of Vagrant, with a Vagrantfile, some shared directories and forwarded ports.
The first issue is that this doesn't seem to me the best choice when working in a team. How can we (me and 10 other developers, for example) share the box so that every change to it is shared? For example, if in 6 months we need PostgreSQL, I want to have it working without having to install PostgreSQL 11 times.
Also, how can I make things (postgresql, django, this-service, etc.) start when the box has started up? I don't think I should have to SSH in and manually start all n things I need every time.
And finally: I don't quite understand whether things like Puppet and Chef are meant to completely replace manual installation (through pip or apt-get, for example). Is that so?
Thank you.
And I'm sorry for my bad English. :-)
I would say that your choice of Vagrant was already a good start towards what you are looking for, but now you need to dig a little deeper into either Chef or Puppet to further automate your provisioning process.
I guess a good choice in your scenario would be to first put both the Vagrantfile and the corresponding Puppet manifest under version control as part of your project. Additionally, all of the configuration files concerning this machine should also be put under version control and/or made available through some sort of artifact repository.
Second, establish the rule in the team that changes to the environment (at least those that should live on for longer) need to be checked in once they are considered ready for the other team members.
Concerning your second question, and coming back to my opening: Puppet (which I like) or Chef are your tools of choice and can save you and your colleagues a lot of work in the future. I'll stick to Puppet here, as I don't know Chef well enough.
With Puppet you can manage all of what you want: the installation of packages, changing configurations, and ensuring that certain services are running; in general, that the system is in the state you want it to be. Even better, if you or another team member makes some unwanted changes to his/her box, you can just roll back the changes in your Vagrantfile/Puppet manifest, type in
vagrant destroy && vagrant up
and the box is easily taken back to the last versioned state.
For example, take the following manifest file:
# Make sure the MySQL server package is installed
# (the names in this example assume a recent Ubuntu)
package { "mysql-server-5.1":
  ensure => present
}

# Fetch the checked-in configuration file; note 'source' is used here,
# as 'content' would write the URL string itself into my.cnf (older Puppet
# versions only accept puppet:/// URIs or local paths such as /vagrant/...)
file { "/etc/mysql/my.cnf":
  owner   => "root",
  source  => "http://myrepository.local/myProject/configurations/mysql/my.cnf",
  require => Package["mysql-server-5.1"]
}

# Keep the service running, restarting it whenever my.cnf changes
service { "mysql":
  ensure    => running,
  subscribe => File["/etc/mysql/my.cnf"],
  require   => File["/etc/mysql/my.cnf"]
}
What this does: it first checks the package mechanism of the OS in your box (the names in the example assume a recent Ubuntu) to see whether the package "mysql-server-5.1" is installed, and installs it if not. Through the 'require' attribute, the second directive is executed after the first (and only if it worked), replacing the MySQL configuration with the one you have checked in and/or published somewhere reachable (it could also be put into the same folder as the Vagrantfile, which is available inside the box under /vagrant). The last step, which again only executes if altering the configuration worked, ensures that the "mysql" service is up and running, and restarts it if it was already running when the configuration changed.
Now you can hook up this manifest in your Vagrantfile:
# Vagrant 1.x-style configuration
Vagrant::Config.run do |config|
  config.vm.box     = "lucid32"
  config.vm.box_url = "http://files.vagrantup.com/lucid32.box"

  config.vm.define "node1" do |cfg|
    # Give the box a static host-only network address
    cfg.vm.network :hostonly, "10.23.5.11"

    # Provision the box from the Puppet manifest shown above
    cfg.vm.provision :puppet do |puppet|
      puppet.manifests_path = "manifests"
      puppet.manifest_file  = "node1.pp"
    end
  end
end
With all changes to the environment (besides the 'trying-stuff-out' ones) made like this, all team members are guaranteed to have the same setup, easily and reproducibly, right at their fingertips.
I personally like to try stuff out on the box by hand, and when I have found the right setup and configuration, translate it into a Puppet manifest to have it ready for later use and for sharing with team members.
As Puppet (and Chef, too) can manage almost everything you need (users, cron jobs, packages, services, files, ...), it is a good choice for exactly these problems, and you get the benefit of being able to use the same configurations to provision staging or testing environments later on if you choose to. There are many more options with Puppet, and a read through the language guide should give you a good idea of what else you can do with it.
Hope I could help :)