I'm thinking of using Vagrant to develop Django applications, but I'm a little confused and I'm not sure if what I would like to do is even possible.
I installed the lucid32 box successfully and created a new "instance" of vagrant, with a Vagrantfile, some shared directories and forwarded ports.
The first issue is that this doesn't seem to me the best choice when working in a team. How can we (me and ten other developers, for example) share the box so that every change to it is shared? For example, if in six months we need PostgreSQL, I want to have it working without having to install PostgreSQL eleven times.
Also, how can I make things (PostgreSQL, Django, some service, etc.) start automatically when the box boots? I don't want to have to SSH into it and manually start all the n things I need, every single time.
And finally: I don't quite understand whether tools like Puppet and Chef are meant to completely replace manual installation (through pip or apt-get, for example). Is that so?
Thank you.
And I'm sorry for my bad English. :-)
I would say that Vagrant was already a good choice for what you are looking for, but now you need to dig a little deeper into either Chef or Puppet to further automate your provisioning process.
I guess a good choice in your scenario would be to first put both the Vagrantfile and the corresponding Puppet manifest under version control as part of your project. Additionally, all of the configuration files concerning this machine should also be put under version control and/or be made available through some sort of artifact repository.
Second, establish a rule in the team that changes to the environment (at least those that should live on for longer) need to be checked in once they are considered ready for the other team members.
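A possible project layout (the file and directory names are only an assumption, chosen to match the Vagrantfile further down) could look like this:

myproject/
  Vagrantfile
  manifests/
    node1.pp
  configurations/
    mysql/
      my.cnf

Everything in this tree goes into version control, so a colleague only needs to check out the project and run "vagrant up".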
Concerning your second question, and coming back to my opening: Puppet (which I like) or Chef are your tools of choice here and can save you and your colleagues a lot of work in the future. I'll stick to Puppet, as I don't know Chef well enough.
With Puppet, you can manage everything you want: the installation of packages, changes to configuration files, and making sure that certain services are running; in general, that the system is in the state you want it to be. Even better, if you or a team member makes some unwanted changes to his/her box, you can just roll back the changes in your Vagrantfile/Puppet manifest, type
vagrant destroy && vagrant up
and the box is easily taken back to the last versioned state.
For example, take the following manifest file:
package { "mysql-server-5.1":
ensure => present
}
file { "/etc/mysql/my.cnf":
owner => "root",
content => "http://myrepository.local/myProject/configurations/mysql/my.cnf",
require => Package["mysql-server-5.1"]
}
service { "mysql":
ensure => running,
subscribe => File["/etc/mysql/my.cnf"],
require => File["/etc/mysql/my.cnf"]
}
What this does: it first asks the package mechanism of the OS in your box (the names in the example assume a recent Ubuntu) whether the package "mysql-server-5.1" is installed, and if not, installs it. Through the 'require' attribute, the second resource is only applied after the first one (and only if it succeeded); it replaces the MySQL configuration with the one you have checked in and/or published somewhere the box can reach it (it could also be put into the same folder as the Vagrantfile, which is then available inside the box under /vagrant). The last step, which again is only applied if changing the configuration worked, ensures that the "mysql" service is up and running, and restarts it if it was already running when the configuration changed.
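If you don't want to depend on an HTTP location (and on a Puppet version that can fetch file sources over HTTP), a variant that pulls the configuration straight from the shared folder could look like this; the path below is just an assumption about where you keep the file next to your Vagrantfile:

file { "/etc/mysql/my.cnf":
  owner   => "root",
  source  => "/vagrant/configurations/mysql/my.cnf",
  require => Package["mysql-server-5.1"],
}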
Now you can hook up this manifest in your Vagrantfile:
Vagrant::Config.run do |config|
  config.vm.box     = "lucid32"
  config.vm.box_url = "http://files.vagrantup.com/lucid32.box"

  config.vm.define "node1" do |cfg|
    cfg.vm.network :hostonly, "10.23.5.11"

    cfg.vm.provision :puppet do |puppet|
      puppet.manifests_path = "manifests"
      puppet.manifest_file  = "node1.pp"
    end
  end
end
With all changes to the environment (apart from the 'trying-stuff-out' ones) made like this, all team members are guaranteed to have the same setup, easily and reproducibly, right at their fingertips.
I personally like to try stuff out on the box by hand, and once I have found the right setup and configuration, translate it into a Puppet manifest to have it ready for later use and for sharing with team members.
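As a minimal sketch of such a translation (the package names are assumptions, and whether the pip provider is available depends on your Puppet version), an ad-hoc "pip install Django" could become:

package { "python-pip":
  ensure => present,
}

package { "Django":
  ensure   => installed,
  provider => pip,
  require  => Package["python-pip"],
}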
As Puppet (and Chef, too) can manage almost everything you need (users, cron jobs, packages, services, files, ...), it is a good choice for exactly this kind of problem, and you get the additional benefit of being able to reuse the configurations to provision staging or testing environments later on if you choose to. There are many more options in Puppet, and a read through the language guide should give you a good idea of what else you can do with it.
Hope I could help :)
Related
I am learning Puppet and am trying to write modules to install services such as TigerVNC and OpenVPN.
The problem is that TigerVNC requires the initial password to be set by the user. I have tried using:
"exec {'/usr/bin/echo password | /usr/bin/vncpasswd > ~/.vnc/passwd"
This works if I run it on the command line while logged in as the user, but it does not work when run via Puppet.
The problem with OpenVPN is that it requires a lot of user interaction for the default settings during certificate, certificate-authority, and key generation.
I have tried using execs with the "pkitool" commands, which work up to a point but not very well or reliably. I am also wary of using many execs if there is a better way to do it.
So, to sum up: my main question is how to deal with these user interactions when trying to automate installations with Puppet. Is there a better way than running lots of execs, which seem to me like a last resort?
Thanks
If setting up a piece of software requires user interaction, I don't really see a way around exec. Keeping its use to a minimum is indeed a sensible design goal.
An economical approach is to
create a script that does all the heavy lifting that Puppet resources cannot perform,
make Puppet deploy that script to the agent, and
run it at appropriate times via exec (along with good creates or onlyif queries); see the sketch below.
Scripts that drive installation wizards relying on interactive input should probably use expect and friends.
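A minimal sketch of that approach for the VNC password case (the module name, script path, and user are assumptions; vncpasswd -f reads the password from stdin and writes the obfuscated form to stdout):

# deploy the helper script from the module's files directory
file { '/usr/local/bin/set_vnc_passwd.sh':
  ensure => file,
  mode   => '0755',
  source => 'puppet:///modules/tigervnc/set_vnc_passwd.sh',
}

# run it as the VNC user, but only if the password file does not exist yet
exec { 'set-vnc-password':
  command     => '/usr/local/bin/set_vnc_passwd.sh',
  user        => 'vncuser',
  environment => ['HOME=/home/vncuser'],  # exec does not give you a login shell, so ~ needs HOME set
  creates     => '/home/vncuser/.vnc/passwd',
  require     => File['/usr/local/bin/set_vnc_passwd.sh'],
}

The script itself would do something along the lines of mkdir -p ~/.vnc && echo password | vncpasswd -f > ~/.vnc/passwd. The creates parameter is what keeps the exec from running again on every Puppet run.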
As in the title: what is the difference between cloning a VM and deploying from a template?
I ask because I cannot find this information anywhere. Currently I am using a virtual machine (Linux) on my vCenter which is cloned, and then a special shell script is run on the freshly cloned machine to set up the environment, IP addresses, etc.
Maybe I would be able to benefit from templates this way.
I think this will be helpful
https://www.robertparten.com/virtualization/vmware-difference-between-clone-and-template/
A few differences, in my opinion:
A virtual machine is a running instance, while a template is a compact copy of a VM (with baseline and factory settings) that can be stored anywhere.
You need to deploy a template to get a running VM.
You can create a copy from both a VM and a template, but a VM is cloned, whereas a template is deployed.
Moving between different setups is easier with a template.
The rest is already mentioned in the link provided.
But first search on your own, and only ask if you still have doubts; that's how we all learn.
Happy Learning!
Looking at these two scenarios:
Create a template from your active VM, then deploy from the template.
Deploy from the active VM directly.
As far as I know, there will be no difference in the end result if you run these scenarios in the near future. You'll still have to run a script to get your IPs set up, etc.
So what's the difference?
If you mess stuff up with your active VM, change things around or whatever, you lose the ability to deploy from the (good) setup you had.
Once you make a template from your active VM, that configuration is saved as a file on the ESX (or the storage, not 100% sure) and can be re-deployed in the future.
I have a chef recipe that can either run on virtual machines or real machines. I need to be able to tell the difference between them inside chef, because I need to treat them differently. I've found something on the internet that said I should just use
if node[:instance_role] == 'vagrant'
but that doesn't seem to work for me. node[:instance_role] is just blank.
Do you know any other way of doing it?
I'm using chef-solo with Vagrant provisioning.
Look under node['virtualization'] for information about the VM runtime. Vagrant isn't actually a VM system though, so you won't see anything about that. A better option for Vagrant-specific behavior is to set a node attribute in your Vagrantfile and reference that in your recipe code.
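A rough sketch of the second suggestion (the attribute name "provisioned_by" and the recipe name are only illustrations), combined with a fallback on Ohai's virtualization data:

# Vagrantfile: pass a custom attribute into the Chef run
config.vm.provision :chef_solo do |chef|
  chef.add_recipe "myapp"
  chef.json = { "provisioned_by" => "vagrant" }
end

# recipe code: branch on that attribute, or on the virtualization info
if node['provisioned_by'] == 'vagrant'
  # Vagrant-specific behaviour
elsif node['virtualization'] && node['virtualization']['role'] == 'guest'
  # generic "running inside a hypervisor" behaviour
end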
I installed Apache HTTP Server on our Windows system, to work on a home project; it's for use by "localhost" only. When I installed it, the two options were to install it as a service, for all users, using port 80; or to install it for just the current user, run manually, using port 8080. I selected the second. However, while I'd prefer for it to use port 8080 and be run manually, I'd like it to be set up so that my wife can run it as her user. (Allowing all users would be OK.) I don't see an httpd.conf entry for this. Is there a way to do this either through httpd.conf or a command-line option? I'm guessing I could do this in the registry but I don't want to mess with it if I don't have to. (P.S. There's no need to have multiple instances run simultaneously.)
There's nothing you can do from within httpd.conf; the settings in there affect the server itself, not who is allowed to start it.
Well, you have a few options:
1. Uninstall the software and re-install it, choosing the "all users" option. That would be your best choice.
2. Find the folder where it was installed (or where the Apache executable is located, as that is the file needed to run the server) and see if you can create a shortcut to it from within your wife's account; see the sketch after this list. The Apache server doesn't care who runs it as long as that file can be executed. The problem you might face is Windows preventing you from running it, especially if it requires administrative rights.
3. Install software such as WAMPServer for her. Of course, that means two similar pieces of software on the same machine.
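For the second option, a rough sketch from a command prompt in your wife's account (the install path below is the Apache 2.2 default and is only an assumption; the executable is httpd.exe on recent versions, apache.exe on older ones):

cd /d "C:\Program Files\Apache Software Foundation\Apache2.2\bin"
httpd.exe

The server then keeps running in that console window and listens on port 8080 as configured; closing the window stops it.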
If I had to do it, I would go the first route. Every other option is going to be a little more complicated to work with.
Hope the explanation is clear and the answer helps.
We have developed a somewhat diffuse system for handling component installation and upgrades across server environments in an automated manner. It worked happily on our development environment, but I've run into a new problem I've not seen before when attempting to deploy it to a live environment.
The environment in question comprises ten servers, five each on two different geographical sites and domains. Each server runs a WCF-based Windows service that allows it to talk to each of the other servers and thus keep track of what's installed where. To facilitate this process we make use of machine-level environment variables, and modifying these obviously means registry changes.
Having got all this set up, my first attempts to use the system to install stuff seemed to work, but on one box in particular I'm getting "Requested registry access is not allowed" errors when the code tries to modify the environment variables. I've googled this, obviously, but there seem to be a variety of different causes and I'm really not sure which are the applicable ones. It doesn't help that this is a live environment and that our system has relatively limited internal logging capability.
The only clue I've got is that the guy who did the install on the development boxes wrote a very patchy set of documentation for the process. This includes an instruction to modify the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\LocalAccountTokenFilterPolicy value in the registry and set it to 1. I skipped this during the installation as it looked like a rather dubious security risk. Reading the documentation about this key, it looks relevant, but my initial attempts at installing stuff on other boxes without this setting enabled worked fine. Sadly, the author went on extended leave over the holidays yesterday and left no explanation of why this key was needed, so we're a bit in the dark.
Can anyone help us toward the light?
Cheers,
Matt
I've seen this error when code tries to write to the event log using something like EventLog.WriteEntry() and a source that is not a registered event source is specified. When a source is specified that has not previously been registered, it will attempt to register the source, which involves writing to the registry.
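If that is the cause here, one way around it is to register the event source once, up front, with sufficient rights (for example from the installer), so the service never has to touch the registry at run time. A minimal sketch in C# (the source name "MyInstaller" is just an illustration):

using System.Diagnostics;

// run once with administrative rights, e.g. during installation
if (!EventLog.SourceExists("MyInstaller"))
{
    EventLog.CreateEventSource("MyInstaller", "Application");
}

// later, at run time, this no longer needs to write to the registry
EventLog.WriteEntry("MyInstaller", "Component installed.", EventLogEntryType.Information);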
I would suggest taking a look at SysInternals Process Monitor:
http://technet.microsoft.com/en-us/sysinternals/bb896645
You can use this to monitor registry access and find out what key you're getting the access denied error on. This may give you some insight as to what is causing the problem.
Essentially he's disabling part of the Remote User Account Control. Without setting the value, Remote UAC strips administrative privileges from account tokens remotely accessing the machine. Yes, it does have security implications. See Description of User Account Control and remote restrictions in Windows Vista for an explanation.
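If you do decide to accept that trade-off on the affected box, the value your colleague documented can be set from an elevated command prompt like this (this is only the setting described above, not a recommendation):

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f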