How to connect to a Mongo database on Heroku from EC2 - ruby-on-rails-3

I am working on a Ruby on Rails app with MongoDB. My app is deployed on Heroku, and for delayed jobs I am using Amazon EC2. Things I have doubts about:
1) How do I connect from Amazon EC2 to the Mongo database, which is hosted with Heroku?
2) When I run delayed jobs, how will they go to the Amazon server, and what changes do I have to make to the app? If somebody can point me to a tutorial for this.

If you want to make your EC2 instance visible to your application on Heroku, you need to grant Heroku's AWS account access to your instance's security group. There are instructions in Heroku's documentation that explain how to connect to external services like this.
https://devcenter.heroku.com/articles/dynos#connecting-to-external-services
In the case of MongoDB running on its default port, you'd want to do something like this:
$ ec2-authorize YOURGROUP -P tcp -p 27017 -u 098166147350 -o default
As for how to handle your delayed jobs running remotely on the EC2 instance, you might find this article from the Artsy engineering team helpful. It sounds like they developed a fairly similar setup.
http://artsy.github.io/blog/2012/01/31/beyond-heroku-satellite-delayed-job-workers-on-ec2/
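For the second question, the usual pattern is to deploy the same Rails code to the EC2 instance and run only the Delayed::Job worker there, pointed at the same Mongo database your Heroku app uses. A rough sketch, assuming your MongoDB add-on exposes a MONGODB_URI config var (yours may be named differently, e.g. MONGOLAB_URI or MONGOHQ_URL):
$ heroku config:get MONGODB_URI --app your-heroku-app   # read the URI Heroku already has
$ export MONGODB_URI="mongodb://user:pass@host:27017/dbname"   # on the EC2 box, set the same value
$ bundle exec rake jobs:work   # start only the Delayed::Job worker, not the Rails server
Your Mongoid/MongoMapper config just needs to read that environment variable so both the Heroku dynos and the EC2 worker talk to the same database.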

Related

How to test the throughput of an application that uses RabbitMQ?

I have an application that uses RabbitMQ to queue messages for other parts of the ecosystem. I would like to do some performance testing and tuning, but just on my part (the program). So I guess I would like to somehow "mock" away the RabbitMQ server, but without changes to my application.
Is there something like a dummy RabbitMQ server that just accepts all messages and throws them away immediately? Or can I configure an actual RabbitMQ server in that way?
I was using a local Docker image for the performance test. You can run it with the command:
docker run -d -p 8081:15672 rabbitmq:3-management
You can access the management GUI on localhost:8081; the default username and password are guest/guest.
After you are done running a performance test you can purge the queue. You do that under Queues > your queue > Purge.
PS: The port can be anything you want; just change 8081 in the docker command :)
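If you prefer to script the cleanup instead of clicking through the GUI, you can also purge from the command line inside the container (the container ID and queue name below are placeholders):
docker exec <container-id> rabbitmqctl purge_queue your_queue_name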

What is the Redis URI when Redis is used in Kubernetes?

Objective
I want to access the Redis database in Kubernetes from a function inside IBM Functions, using JavaScript.
Question
How do I get the right URI when Redis is running in a Pod in Kubernetes?
Situation
I used this sample to set up the Redis database in Kubernetes. This is the link to the sample in Kubernetes.
I run Kubernetes inside IBM Cloud.
Findings
I was not able to find an answer to my question in the Redis documentation.
As far as I understand, no password is configured by default.
Is this assumption right?
redis://[USER]:[PASSWORD]@[CLUSTER-PUBLIC-IP]:[PORT]
Thanks for the help ... I know this is maybe too simple a question, but currently I cannot see the forest for the trees ;-)
As far as I understand, no password is configured by default.
Yes, you are right: there is no default password in that Redis image.
If you follow the instructions you mentioned, you will use kubectl port forwarding, which forwards the port of your in-cluster Redis to your local machine when you call kubectl port-forward redis-master 6379:6379.
So in that case, Redis will be available at redis://localhost:6379 on your PC.
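You can quickly check the forwarded connection from your machine, assuming you have redis-cli installed locally (run the two commands in separate terminals, since port-forward keeps running):
kubectl port-forward redis-master 6379:6379
redis-cli -h 127.0.0.1 -p 6379 ping   # should reply with PONG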
If you want to make it available directly from outside the cluster, you need to create a Service of type NodePort, a Service of type LoadBalancer (if you are in a cloud), or a Service behind an Ingress.
Inside the cluster, you can create a Service with a ClusterIP (which is just a plain Service, because a Service always has a ClusterIP) for your Redis pod, and it will be available at:
redis://[USER]:[PASSWORD]@[SERVICE-IP]:[PORT]
Here is the official documentation about connecting applications with a Service.
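As a minimal sketch, assuming the pod from the sample is called redis-master and has labels set, you could expose it inside the cluster with:
kubectl expose pod redis-master --port=6379 --name=redis
Other pods in the same namespace can then reach it at redis://redis:6379 (or redis://redis.default.svc.cluster.local:6379 from other namespaces); for access from IBM Functions outside the cluster you would still need the NodePort or LoadBalancer variant mentioned above.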

How to set up a dedicated Sidekiq server?

I have a Rails app which is deployed on 2 EC2 instances with nginx and Capistrano. For background jobs, I use Sidekiq with Redis. I have 50 GB of memory on the server, and I have also set Sidekiq's concurrency (max_pool_size) to 50. I want to use one instance as a dedicated server for Sidekiq. How shall I do that?
You can't have just a Rails app on one instance and a bare Sidekiq server on the other; the Sidekiq instance needs a copy of your Rails app too. What you can have is a Rails app that connects to a second instance which also has a copy of your Rails app and runs the Sidekiq process. The only thing you'd do differently on the Sidekiq server is not run rails and instead only run sidekiq.
In my opinion, you're complicating things by wanting it like this. You're better off having both on the same server and running a separate ElastiCache Redis instance to which your Sidekiq can connect.
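If you still want the dedicated-worker layout, a minimal sketch is: deploy the same codebase to both instances with Capistrano, point both at the shared Redis, and on the worker instance start only the Sidekiq process (the ElastiCache endpoint below is a placeholder):
REDIS_URL=redis://your-elasticache-endpoint:6379/0 bundle exec sidekiq -e production -C config/sidekiq.yml
Sidekiq picks up REDIS_URL by default, so as long as the Rails instances enqueue jobs against the same Redis, the dedicated instance will process them.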

Docker Swarm - load-balancing to closest node first

I'm trying to optimize Docker Swarm load balancing so that it routes requests to services with the following priority:
Same machine
Same DC
Anywhere else.
Given the following setup:
DataCenter-I
  Server-I
    Nginx:80
  Server-II
    Nginx:80
    Worker
DataCenter-II
  Server-I
    Nginx:80
    Worker
In case DataCenter-I::Server-II::Worker issues an API request over port 80, the desired behavior is:
Check if there are any tasks (containers) mapped to port 80 on the local server (DataCenter-I::Server-II)
Fall back and check in the local DataCenter (i.e. DataCenter-I::Server-I)
Fall back and check in the whole cluster (i.e. DataCenter-II::Server-I)
This case is very useful when using workers, where response time doesn't matter but bandwidth does.
Please advise,
Thanks!
According to this question I asked before, Docker Swarm currently only uses round-robin, and there is no indication yet that it will become pluggable.
However, Nginx Plus supports the least_time load-balancing method, and I think there will be a similar open-source module; it is close to what you need and perhaps the least effort.
PS: Don't run Nginx under Docker Swarm. Instead, run Nginx with regular docker or docker-compose in the same Docker network as your app. You don't want Docker Swarm load-balancing your load balancer.
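For example, a rough sketch of running Nginx with plain Docker on the same network as the app (the network, image and container names here are placeholders):
docker network create appnet
docker run -d --name app --network appnet your-app-image
docker run -d --name edge --network appnet -p 80:80 -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro nginx
The nginx.conf can then proxy_pass to http://app:80, since containers on a user-defined network resolve each other by name.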

Retrieve application config from secure location during task start

I want to make sure I'm not storing sensitive keys and credentials in source or in docker images. Specifically I'd like to store my MySQL RDS application credentials and copy them when the container/task starts. The documentation provides an example of retrieving the ecs.config file from s3 and I'd like to do something similar.
I'm using the Amazon ECS optimized AMI with an auto scaling group that registers with my ECS cluster. I'm using the ghost docker image without any customization. Is there a way to configure what I'm trying to do?
You can define a volume on the host and map it into the container with read-only privileges.
Please refer to the following documentation for configuring a data volume for an ECS task.
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html
Even though the container does not have the config at build time, it will read the config as if it were available in its own file system.
There are many ways to secure the config on the host OS.
In my past projects, I have achieved the same by disabling SSH into the host and injecting the config at boot-up using cloud-init.
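A rough sketch of that approach, assuming an S3 bucket you control and an instance role that is allowed to read it (the bucket, paths and file names are placeholders, and the AWS CLI is assumed to be present on the host):
#!/bin/bash
# EC2 user data for the ECS container instance: pull credentials from S3 onto the host at boot.
# The ECS task definition then mounts /etc/app-secrets into the container as a read-only volume.
mkdir -p /etc/app-secrets
aws s3 cp s3://your-config-bucket/ghost-config.json /etc/app-secrets/config.json
chmod 600 /etc/app-secrets/config.json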