I have been working on this for days now, and I can't figure out what is wrong.
Everything else is working, but I get "ECONNREFUSED" from Redis.
I have the following instances running:
app01 ROLE: app
web01 ROLE: web
db01 ROLE: db:primary
redis01 ROLE: redis_master
redis02 ROLE: redis_slave
sidekiq01 ROLE: redis
Here is the error from the production log:
Redis::CannotConnectError (Error connecting to Redis on localhost:6379 (ECONNREFUSED)):
app/models/user.rb:63:in `send_password_reset'
app/controllers/password_resets_controller.rb:10:in `create'
Everything is set up using the rubber gem.
I have tried removing all instances and starting over from scratch twice. I have also tried adding a custom security rule, but I'm not sure I did it right.
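This is roughly the rule I added to rubber.yml to open the Redis port between instances (I copied the schema from the default group in the rubber template, so the keys may well be wrong, which is partly why I'm asking):
security_groups:
  redis:
    description: Allow redis traffic from the other instances
    rules:
      - protocol: tcp
        from_port: 6379
        to_port: 6379
        source_group_name: default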
Please help me!
Bringing this post back from the dead because I found it when I was struggling with the same problem today. I resolved my problem by doing the following:
I added redis_slave or redis_master roles to the servers using cap rubber:add_role. I found this adds both the specified role and the generic "redis" role. Assuming that you want redis01 to be the only redis_master after adding roles, I'd expect your environment to have:
app01 ROLE: app
web01 ROLE: web
db01 ROLE: db:primary
redis01 ROLE: redis_master
redis01 ROLE: redis
redis02 ROLE: redis_slave
redis02 ROLE: redis
sidekiq01 ROLE: redis_slave
sidekiq01 ROLE: redis
After setting up the roles, I updated the servers with cap rubber:bootstrap.
In my environment, I'm deploying code from git, so I had to commit these changes and run cap -s branch="branch_name_or_sha" deploy to get rubber/deploy-redis.rb on the servers with the new roles and execute it.
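Putting it together, the whole sequence looked roughly like this (the ALIAS/ROLES environment variables are how I recall passing the instance alias and roles to rubber; your version may prompt for them instead):
ALIAS=redis01 ROLES=redis_master cap rubber:add_role
ALIAS=redis02 ROLES=redis_slave cap rubber:add_role
ALIAS=sidekiq01 ROLES=redis_slave cap rubber:add_role
cap rubber:bootstrap
cap -s branch="branch_name_or_sha" deploy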
After doing all this, redis runs on all my nodes without throwing Redis::CannotConnectError (Error connecting to Redis on localhost:6379 (ECONNREFUSED)) error on any of them.
Good Luck!
On a new installation of Cassandra 3.0.20 on Red Hat 7 I cannot list roles. I have tried the option of fixing /etc/alternatives/cassandra/cassandra.yaml with...
authenticator: PasswordAuthenticator
and then restarting the service.
Still, when I run a simple command like LIST ROLES I get the following error:
cassandra#cqlsh> list roles;
Unauthorized: Error from server: code=2100 [Unauthorized] message="You have to be logged in and not anonymous to perform this request"
It turns out that systemctl was not completely stopping Cassandra due to weirdness with Red Hat 7 and the init file. Therefore the changes to my cassandra.yaml were not taking effect.
Once I killed Cassandra, made a proper cassandra.service, and restarted, the desired settings took effect, and I am able to run operations like "LIST ROLES;" normally.
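For anyone hitting the same thing, this is roughly the sequence that worked for me (the pgrep pattern matches the Cassandra main class; the default superuser credentials are assumed unchanged):
systemctl stop cassandra
pgrep -f CassandraDaemon          # should print nothing; if it does, kill that PID
systemctl daemon-reload           # pick up the new cassandra.service
systemctl start cassandra
cqlsh -u cassandra -p cassandra   # PasswordAuthenticator requires a login
cassandra@cqlsh> LIST ROLES;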
I encountered a problem while using EKS with fluent-bit and I would be grateful for the community's help. First I'll describe the cluster.
We are running an EKS cluster in a VPC that had an unmanaged node group.
The EKS cluster's network configuration is marked as "public and private", and we use fluent-bit with the Elasticsearch service to show logs in Kibana.
We decided that we wanted to move to a managed node group in that cluster, and we migrated from the unmanaged node group to a managed node group successfully.
Since the migration we cannot see any logs in Kibana, and when getting the logs manually from the fluent-bit pods there are no errors.
I enabled debug-level logging for fluent-bit to get a better look at it.
I can see that fluent-bit gathers all the log files, and then I saw that we get these messages:
[debug] [out_es] HTTP Status=403 URI=/_bulk
[debug] [retry] re-using retry for task_id=63 attemps=3
[debug] [sched] retry=0x7ff56260a8e8 63 in 321 seconds
Furthermore, we have managed node groups in other EKS clusters, but we did not migrate to those; they were created with a managed node group from the start.
The new managed node group was created from the same template as a working managed node group; the only difference is the compute power.
The template has nothing special in it except auto scaling.
I compared the node group IAM role of a working node group with that of my non-working node group, and the roles seem to be the same.
As for my fluent-bit configuration, I have the same configuration in a few EKS clusters and it works there, so I don't think it's the root cause, but I can add it if anyone thinks otherwise.
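For completeness, the Elasticsearch output section looks roughly like this (the host, region and index below are placeholders, not our real values):
[OUTPUT]
    Name            es
    Match           *
    Host            vpc-my-logs-domain-abc123.us-east-1.es.amazonaws.com
    Port            443
    TLS             On
    AWS_Auth        On
    AWS_Region      us-east-1
    Index           fluent-bit
    Replace_Dots    On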
Has anyone had this kind of problem? Why could a node group migration cause such an issue?
Thanks in advance!
Lesson learned: always look at the access policy of the resource you are having issues with; it may not match your node group's role.
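For anyone else hitting HTTP 403 on /_bulk after a node group change: the Elasticsearch domain's access policy needs to allow the new node group's instance role. Something along these lines did it for me (the account ID, role name, region and domain name are placeholders):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/my-managed-nodegroup-role" },
      "Action": "es:ESHttp*",
      "Resource": "arn:aws:es:us-east-1:111122223333:domain/my-logs-domain/*"
    }
  ]
}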
Need help fixing this error:
Faraday::ConnectionFailed (Connection reset by peer - SSL_connect):
This is from the log file /var/log/gitlab/gitlab-rails/production.log.
I get this error when trying to sign in to our GitLab CE with Google auth.
This is my environment:
- CentOS 7
- GitLab 12.5
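The Google provider is configured in /etc/gitlab/gitlab.rb roughly like this (client ID and secret redacted):
gitlab_rails['omniauth_enabled'] = true
gitlab_rails['omniauth_providers'] = [
  {
    "name" => "google_oauth2",
    "app_id" => "REDACTED_CLIENT_ID",
    "app_secret" => "REDACTED_CLIENT_SECRET",
    "args" => { "access_type" => "offline", "approval_prompt" => "" }
  }
]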
any help is appreciated :) Thanks
You had a similar error reported in gitlab-org/gitlab-foss issue 1924:
I had the exact same thing happen last night and it turned out that the /etc/resolv.conf within the Docker container was unreadable by the "git" user for the container.
This prevents it from resolving the host you're calling back to.
The ball started rolling after reading this issue: docker-gitlab issue 627.
In your case, you might not be running GitLab in Docker, in which case check your proxy settings.
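If you are running the Omnibus image in Docker, a quick way to check the resolv.conf theory (the container name is an example):
docker exec -it gitlab ls -l /etc/resolv.conf
docker exec -it gitlab chmod o+r /etc/resolv.conf                # only if the git user cannot read it
docker exec -it -u git gitlab getent hosts accounts.google.com   # should resolve if DNS works for that user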
I built Spinnaker using docker-compose following the guide here,
but it always redirects to localhost. How can I fix this?
e.g.
http://localhost:8084/auth/redirect?to=http%3A%2F%2F192.168.99.100%3A9000%2F%23%2Finfrastructure
I set host: 0.0.0.0 in spinnaker-local.yml and configured deck's apache2 with proxyPreserve=On, but it's not working.
Where is the configuration for the 'redirect'?
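For reference, the part of spinnaker-local.yml I have changed so far looks roughly like this (legacy docker-compose layout, trimmed):
services:
  default:
    host: 0.0.0.0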
All containers are running well, but fiat gets error messages like this:
WARN 1 --- [ecutionAction-1] c.n.s.fiat.roles.UserRolesSyncer : [] User permission sync failed. Server status is DOWN. Trying again in 10000 ms. Cause:(Provider: DefaultServiceAccountProvider) retrofit.RetrofitError: unexpected url: front50/serviceAccounts
I'm sure I set fiat to false; does this matter?
Thanks.
The docker-compose project that link points to is not available anymore. That deployment type is not supported anymore.
The easiest way I can suggest for people to get started quickly is to use Armory's open source Minnaker. It runs on top of a small K3s cluster and contains a functional Spinnaker deployment.
It is a great way to get started.
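The install is roughly this (from memory; check the armory/minnaker README for the current steps and flags):
git clone https://github.com/armory/minnaker
cd minnaker
./scripts/install.sh    # installs a K3s cluster and deploys open source Spinnaker on top of it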
I tried the Debian local deployment and it failed every time.
Enjoy your CD operations.
I was doing failover testing of MongoDB in my local environment. I have two mongo servers (hostname1, hostname2) and an arbiter.
I have the following configuration in my mongoid.yml file:
localhost:
  hosts:
    - - hostname1
      - 27017
    - - hostname2
      - 27017
  database: myApp_development
  read: :primary
  use_activesupport_time_zone: true
Now when I start my Rails application, everything works fine, and the data is read from the primary (hostname1). Then I kill the mongo process of the primary (hostname1), so the secondary (hostname2) becomes the primary and starts serving the data.
Then after some time I start the mongo process on hostname1 again, and it becomes the secondary in the replica set.
Now the primary (hostname2) and secondary (hostname1) are working all right.
The real problem starts here.
I kill the mongo process of my new primary (hostname2), but this time the secondary (hostname1) does not become the primary, and any further requests to the Rails application raise the following error:
Cannot connect to a replica set using seeds hostname2
Please help. Thanks in advance.
UPDATE:
I added some logging in the mongo driver's repl_connection class and came across this:
When I boot the Rails app, both hosts are in the seeds array that the mongo driver keeps track of. But during the second failover, only the host that went down is present in this array.
Hence I would also like to know how and when one of the hosts gets removed from the seed list.
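For anyone debugging the same thing, the replica set's own view (independent of the driver's seed list) can be checked from the mongo shell on the surviving node, e.g.:
mongo --host hostname1 --port 27017
> rs.status().members.map(function(m) { return m.name + " " + m.stateStr })
> rs.conf().members.map(function(m) { return m.host })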