Rails rubber deployment on existing EC2 instance - ruby-on-rails-3

I usually use the rubber gem to deploy to Amazon EC2. However, now I want to deploy to an existing EC2 instance that is already running. I could not find any reference on the internet. Everything I could find uses cap rubber:create_staging or cap rubber:create, but I don't want to create a new instance.
Any help?

Run cap deploy.
As long as you already have your production instance configured, Capistrano should deploy the updates.

Related

Creating a kubernetes cluster on GCP using Spinnaker

For end-to-end DevOps automation I want to have an environment on demand. For this I need to spin up an environment on Kubernetes, which is in turn hosted on GCP.
My Use case
1. A developer checks in code on a feature branch
2. An environment is spun up on Google Cloud with Kubernetes
3. The application gets deployed on Kubernetes
4. It gets tested, and then the environment gets destroyed.
I am able to do everything with Spinnaker except #2, i.e. creating the Kubernetes cluster on GCP using Spinnaker.
Any help please
Thanks,
Amol
I'm not sure Spinnaker was meant for the second point in your list. Spinnaker assumes a collection of resources (VMs or a Kubernetes cluster) already exists and then works with that. So instead of spinning up a new GKE cluster, Spinnaker makes use of existing clusters. I think it would be better (for your costs as well ;) if you separate the environments using Kubernetes namespaces.
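For reference, an ephemeral per-branch namespace is just a small manifest; a minimal sketch, where the name feature-x and the label are placeholders:

```yaml
# namespace.yaml -- one namespace per feature branch (name is a placeholder)
apiVersion: v1
kind: Namespace
metadata:
  name: feature-x
  labels:
    purpose: ephemeral-test
```

Create it with kubectl apply -f namespace.yaml, point Spinnaker's Kubernetes provider at that namespace for the deploy stage, and delete the namespace afterwards to tear down everything deployed inside it.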

DC/OS running a service on each agent

Is there any way of running a service (single instance) on each deployed agent node? I need that because each agent needs to mount storage from S3 using s3fs.
The name of the feature you're looking for is "daemon tasks", but unfortunately, it's still in the planning phase for Mesos itself.
Due to the fact that schedulers don't know the entire state of the cluster, Mesos needs to add a feature to enable this functionality. Once in Mesos it can be integrated with DC/OS.
The primary workaround is to use Marathon to deploy an app with the UNIQUE constraint ("constraints": [["hostname", "UNIQUE"]]) and set the app instances to the number of agent nodes. Unfortunately this means you have to adjust the instances number when you add new nodes.
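A minimal Marathon app definition along those lines might look like the sketch below (the bucket name, mount point, resource sizes, and instance count are placeholders; instances should match your current number of agent nodes):

```json
{
  "id": "/s3fs-mounter",
  "cmd": "s3fs my-bucket /mnt/s3 -f",
  "instances": 3,
  "cpus": 0.1,
  "mem": 64,
  "constraints": [["hostname", "UNIQUE"]]
}
```

The -f flag keeps s3fs in the foreground so Marathon can supervise it, and the UNIQUE constraint on hostname ensures at most one instance lands on each agent.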

Are there any ansible module(s) to manage OpenStack Load-balancers (OpenStack LBaas)?

I want to define a pool in OpenStack LBaaS (Load Balancer as a Service) and then assign a VIP to it in order to create a load-balanced cluster of servers. I want to automate this with Ansible, so I am looking for Ansible modules that could help achieve this.
Ansible doesn't provide a core module for Neutron management yet, and one doesn't appear in the openstack-ansible GitHub project.
Checking the TODO for the openstack-ansible project shows that they are still planning to add Neutron LBaaS configuration.
Ansible 2.7 now provides what you need, as long as you have Octavia installed and enabled on your OpenStack cloud:
Add/Delete load balancer from OpenStack Cloud:
https://docs.ansible.com/ansible/latest/modules/os_loadbalancer_module.html#os-loadbalancer-module
Add/Delete a listener for a load balancer from OpenStack Cloud:
https://docs.ansible.com/ansible/latest/modules/os_listener_module.html#os-listener-module
Add/Delete a pool in the load balancing service from OpenStack Cloud:
https://docs.ansible.com/ansible/latest/modules/os_pool_module.html#os-pool-module
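Taken together, a playbook using those three modules might look like this sketch (the names, subnet, and ports are placeholders; it assumes Ansible ≥ 2.7 with the OpenStack SDK installed and Octavia enabled on the cloud):

```yaml
- hosts: localhost
  tasks:
    - name: Create the load balancer on a VIP subnet
      os_loadbalancer:
        name: web-lb
        vip_subnet: my-subnet
        state: present

    - name: Add an HTTP listener to the load balancer
      os_listener:
        name: web-listener
        loadbalancer: web-lb
        protocol: HTTP
        protocol_port: 80
        state: present

    - name: Add a round-robin pool behind the listener
      os_pool:
        name: web-pool
        listener: web-listener
        protocol: HTTP
        lb_algorithm: ROUND_ROBIN
        state: present
```

Members (your backend servers) can then be added to the pool; each module is idempotent via state: present/absent.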

Resolving Chef Dependencies

In my lab, I am currently managing a 20 nodes cluster with Cobbler and Chef. Cobbler is used for OS provisioning and basic network settings, which is working fine as expected. I can manage several OS distributions with preseed-based NQA installation and local repo mirroring.
We also successfully installed the Chef server and started managing nodes, but Chef is not working as I expected. The issue is that I am not able to set node dependencies within Chef. One important use case is this:
We are setting up Ceph and OpenStack on these nodes
Ceph should be installed before OpenStack because OpenStack uses Ceph as back-end storage
The Ceph monitor should be installed before the Ceph OSDs because creating an OSD requires talking to a monitor
The dependency between OpenStack and Ceph does not matter much, because it is a dependency within one node; just installing OpenStack later resolves the issue.
However, a problem arises with the dependency between the Ceph monitor and the Ceph OSDs. Ceph OSD provisioning requires a running Ceph monitor, so the ceph-osd recipe should always run after the ceph-mon recipe finishes on another node. Our current method is just to run "chef-client" on the "ceph-osd" node after the "chef-client" run completely finishes on the "ceph-mon" node, but I think this is too much of a hassle. Is there a way to set these dependencies in Chef so that nodes will provision sequentially according to their dependencies? If not, are there good frameworks that handle this?
Chef itself has no method for orchestration (that's not Chef's job).
A workaround, given your use case, could be to use tags and search.
Your monitor recipe could tag the node at the end (with tag("CephMonitor"), or by setting any attribute you wish to search on).
After that, Chef's Solr index has to catch up (usually within a minute), and then in the ceph-osd recipe you can use search and do something like this:
ceph_monitors = search(:node, "tags:CephMonitor")
return if ceph_monitors.nil? || ceph_monitors.empty?
[.. rest of the ceph-osd recipe, using ceph_monitors.first['fqdn'] or other attributes from the node ..]
The same approach can be used to avoid running the OpenStack recipe until the OSD recipe has run.
The drawback is that it will take 2 or 3 Chef runs to get to a converged infrastructure.
I have nothing specific to recommend for the orchestration itself; ZooKeeper or Consul could help instead of tags and could trigger the runs.
Rundeck can stage the runs on different nodes and aggregate them in one job.
Which is best depends on your preferences.

How to connect to mongo database in heroku from ec2

I am working on a Ruby on Rails app with MongoDB. My app is deployed on Heroku, and for delayed jobs I am using Amazon EC2. Things I have doubts about:
1) How do I connect from Amazon EC2 to the Mongo database, which is basically at Heroku?
2) When I run delayed jobs, how will they go to the Amazon server, and what changes do I have to make to the app? It would help if somebody could point me to a tutorial for this.
If you want to make your EC2 instance visible to your application on Heroku, you need to authorize Heroku's AWS security group in your instance's security group on Amazon. There are instructions in Heroku's documentation that explain how to connect to external services like this.
https://devcenter.heroku.com/articles/dynos#connecting-to-external-services
In the case of MongoDB running on its default ports, you'd want to do something like this:
$ ec2-authorize YOURGROUP -P tcp -p 27017 -u 098166147350 -o default
As for how to handle your delayed jobs running remotely on the EC2 instance, you might find this article from the Artsy engineering team helpful. It sounds like they developed a fairly similar setup.
http://artsy.github.io/blog/2012/01/31/beyond-heroku-satellite-delayed-job-workers-on-ec2/