How can I use Chef JSON to set a Redis and Sidekiq configuration?

I'm using AWS OpsWorks for a Rails application with Redis and Sidekiq and would like to do the following:
Override the maxmemory config for redis
Only run Redis & Sidekiq on a selected EC2 instance
My current JSON config only has the database.yml overrides:
{
  "deploy": {
    "appname": {
      "database": {
        "username": "user",
        "password": "password",
        "database": "db_production",
        "host": "db.host.com",
        "adapter": "mysql2"
      }
    }
  }
}

Override the maxmemory config for redis
Check whether your Redis cookbook of choice exposes an attribute for that, or lets you supply custom config values. I know the main redisio cookbook lets you set config values, as I do it on my stacks (I set the path to the on-disk cache, I believe).
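For example, if you're using the redisio cookbook, something along these lines in the stack's custom JSON should do it (the exact attribute path varies by cookbook and version, so check its attributes/default.rb):
{
  "redisio": {
    "default_settings": {
      "maxmemory": "256mb"
    }
  }
}
You'd merge that alongside your existing "deploy" key in the same custom JSON.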
Only run Redis & Sidekiq on a selected EC2 instance
This part is easy: create a Layer for Redis (or Redis/Sidekiq) and add an instance to that layer.
Now, because Redis is on a different instance than your Rails server, you won't necessarily know what the IP address for your Redis server is. Especially since you'll probably want to use the internal EC2 IP address vs the public IP address for the box (using the internal address means you're already inside the default firewall).
Sooo... what you'll probably need to do is to write a custom cookbook for your app, if you haven't already. In your attributes/default.rb write some code like this:
# Find the first instance in the Redis layer (shortname "REDIS") and fall back
# to localhost if that layer doesn't exist or has no instances yet.
redis_instance_details = nil
redis_stack_name = "REDIS"
redis_layer = node["opsworks"]["layers"][redis_stack_name]
redis_instance_name, redis_instance_details = redis_layer["instances"].first if redis_layer
redis_server_dns = "127.0.0.1"
if redis_instance_details
  redis_server_dns = redis_instance_details["private_dns_name"]
end
Then later in the same attributes file, use redis_server_dns to build the Redis URL your app will see, for example (redis_port_number is whatever port your Redis cookbook configures, 6379 by default):
redis_port_number = 6379
default[:deploy][:appname][:environment_variables][:REDIS_URL] = "redis://#{redis_server_dns}:#{redis_port_number}"
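Sidekiq will normally pick up ENV["REDIS_URL"] on its own, but if you want to be explicit you can point it at the same variable in an initializer (purely illustrative sketch; file name and options are up to you):
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = { url: ENV["REDIS_URL"] }
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV["REDIS_URL"] }
end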
Hope this helps!

Related

Logstash Redis Input connection refused

I'm trying to migrate a docker-based redis container into AWS ElastiCache. I have the Redis instance running and can connect via the redis CLI, but when I set up Logstash with the following:
input {
  redis {
    host => "redis<domain>.cache.amazonaws.com"
    data_type => "list"
    key => "logstash"
    codec => msgpack
  }
}
It explodes with this:
[2022-02-02T13:52:27,575][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/logstash.conf"], :thread=>"#<Thread:0x547a32a1 run>"}
[2022-02-02T13:52:28,685][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.11}
[2022-02-02T13:52:28,701][INFO ][logstash.inputs.redis ][main] Registering Redis {:identity=>"redis://#redis<domain>.cache.amazonaws.com:6379/0 list:logstash"}
[2022-02-02T13:52:28,709][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-02-02T13:52:28,823][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-02-02T13:52:28,837][ERROR][logstash.inputs.redis ][main][08c8cf37082e202fd617f2bc3c642b630c437b5e58521b08cd412f29ed9a10e1] Unexpected error {:message=>"invalid uri scheme ''", :exception=>ArgumentError, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.5.1/lib/redis/client.rb:473:in `_parse_options'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.5.1/lib/redis/client.rb:94:in `initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.5.1/lib/redis.rb:65:in `initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-redis-3.7.0/lib/logstash/inputs/redis.rb:129:in `new_redis_instance'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-redis-3.7.0/lib/logstash/inputs/redis.rb:134:in `connect'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-redis-3.7.0/lib/logstash/inputs/redis.rb:186:in `list_runner'", "org/jruby/RubyMethod.java:131:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-redis-3.7.0/lib/logstash/inputs/redis.rb:87:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:409:in `inputworker'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:400:in `block in start_input'"]}
but when I then use this configuration to provide the uri:
input {
  redis {
    host => "redis://redis<domain>.cache.amazonaws.com"
    data_type => "list"
    key => "logstash"
    codec => msgpack
  }
}
I get this:
[2022-02-02T13:57:10,475][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/logstash.conf"], :thread=>"#<Thread:0x20738737 run>"}
[2022-02-02T13:57:11,586][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.11}
[2022-02-02T13:57:11,600][INFO ][logstash.inputs.redis ][main] Registering Redis {:identity=>"redis://#redis://redis<domain>.cache.amazonaws.com:6379/0 list:logstash"}
[2022-02-02T13:57:11,605][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-02-02T13:57:11,724][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-02-02T13:57:11,843][WARN ][logstash.inputs.redis ][main][08c8cf37082e202fd617f2bc3c642b630c437b5e58521b08cd412f29ed9a10e1] Redis connection error {:message=>"Error connecting to Redis on redis://redis<domain>.cache.amazonaws.com:6379 (SocketError)", :exception=>Redis::CannotConnectError}
The latter error looks saner, but the Registering Redis line looks messed up. Neither provides any insight as to why the connection fails, yet I can connect to the Redis instance from the pod. What am I missing here?
It turned out that, on top of the config, I also had an environment variable called REDIS_URL set, which was gazumping the config because the Redis client uses it.
From the readme, I finally discovered:
By default, the client will try to read the REDIS_URL environment variable and use that as URL to connect to. The above statement is therefore equivalent to setting this environment variable and calling Redis.new without arguments.
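So the fix was to keep the bare hostname in host => and make sure the Logstash process doesn't see a stray REDIS_URL. Roughly (shell sketch, adapt to however your container or environment is set up):
# check whether the variable is set in the environment Logstash runs in
env | grep REDIS_URL

# remove it (or drop it from the Dockerfile / deployment manifest) and restart Logstash
unset REDIS_URL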

How to use EKS with suitable volumes and resolve an insufficient subnet IP issue on AWS?

I deployed an application in EKS. The deployment is always pending; when I checked the events I found these issues.
$ kubectl get events
LAST SEEN TYPE REASON OBJECT MESSAGE
89s Warning FailedScheduling pod/awx-demo-111111111-122222 running PreBind plugin "VolumeBinding": binding volumes: provisioning failed for PVC "awx-demo-projects-claim"
49m Warning FailedDeployModel ingress/awx-demo-ingress Failed deploy model due to InvalidSubnet: Not enough IP space available in subnet-031f9c702bc474e8f. ELB requires at least 8 free IP addresses in each subnet.
status code: 400, request id: 11111111-2222-3333-4444-555555555555
32m Warning FailedDeployModel ingress/awx-demo-ingress Failed deploy model due to InvalidSubnet: Not enough IP space available in subnet-01322i912fas0123na. ELB requires at least 8 free IP addresses in each subnet.
status code: 400, request id: 11111111-2222-3333-4444-555555555515
15m Warning FailedDeployModel ingress/awx-demo-ingress Failed deploy model due to InvalidSubnet: Not enough IP space available in subnet-031f9c702bc474e8f. ELB requires at least 8 free IP addresses in each subnet.
status code: 400, request id: 11111111-2222-3333-4444-555555555525
89s Normal WaitForPodScheduled persistentvolumeclaim/awx-demo-projects-claim waiting for pod awx-demo-111111111-122222 to be scheduled
21m Warning ProvisioningFailed persistentvolumeclaim/awx-demo-projects-claim Failed to provision volume with StorageClass "gp2": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce] are supported
It seems there are both a volume issue and a subnet issue. I created the EKS cluster and node group with these configurations:
resource "aws_eks_cluster" "this" {
encryption_config {
resources = ["secrets"]
provider {
key_arn = aws_kms_key.this.arn
}
}
enabled_cluster_log_types = ["api", "authenticator", "audit", "scheduler", "controllerManager"]
name = local.cluster_name
version = "1.20"
role_arn = aws_iam_role.eks_cluster.arn
vpc_config {
subnet_ids = [
data.aws_ssm_parameter.private_subnet_0_id.value,
data.aws_ssm_parameter.private_subnet_1_id.value,
]
security_group_ids = [aws_security_group.this.id]
endpoint_public_access = true
}
depends_on = [
aws_iam_role_policy_attachment.eks_cluster_policy,
aws_iam_role_policy_attachment.eks_vpc_resource_controller,
aws_iam_role_policy_attachment.eks_service_policy,
]
tags = merge(
local.tags,
)
}
resource "aws_eks_node_group" "this" {
cluster_name = local.cluster_name
node_group_name = local.node_group_name
node_role_arn = aws_iam_role.eks_nodes.arn
instance_types = ["m5.2xlarge"]
subnet_ids = [
data.aws_ssm_parameter.private_subnet_0_id.value,
data.aws_ssm_parameter.private_subnet_1_id.value,
]
scaling_config {
desired_size = 2
max_size = 2
min_size = 2
}
lifecycle {
ignore_changes = [scaling_config[0].desired_size]
}
depends_on = [
aws_iam_role_policy_attachment.eks_worker_node_policy,
aws_iam_role_policy_attachment.eks_cni_policy,
aws_iam_role_policy_attachment.ec2_container_register_readonly,
]
tags = merge(
local.tags,
)
}
I didn't define the volume type for EBS, so maybe it's using the default setting. How can I fix the issue?
As for the insufficient IP addresses in the VPC: if I create a new subnet for EKS to use, is it necessary to delete the EKS cluster or the node group?
By the way, the deployment I used was https://raw.githubusercontent.com/ansible/awx-operator/0.13.0/deploy/awx-operator.yaml.
The install followed https://github.com/ansible/awx-operator#basic-install.
@miantian, continuing our discussion from the comments:
A subnet's size cannot simply be increased. If you change the subnet size, the subnet will be recreated, and because the EKS cluster is already using it, that recreation will fail. So I would say: start fresh. Delete everything and rebuild.
Regarding the volume issue: by default EKS only supports the ReadWriteOnce access mode. That's down to the technical limitation in AWS that an EBS volume can only be attached to one EC2 instance at a time. If you want the ReadWriteMany access mode, you need to use EFS.
If you want to use EFS, look up the NFS/EFS client provisioner (or the newer EFS CSI driver) for EKS. There are a few steps you need to follow to create an EFS provisioner in EKS; after that you can start using the ReadWriteMany access mode.
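As a rough sketch of where that ends up (this assumes the EFS CSI driver is installed; the fileSystemId and names are placeholders you'd replace with your own), a StorageClass plus a ReadWriteMany claim looks something like:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: awx-demo-projects-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 8Gi
You would then point the workload's claim at efs-sc instead of the default gp2 class.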

Establish Redis Connection in Phoenix

I need to establish a Redis connection when my Phoenix app initially loads. When reading the docs I thought that code would go in /config/dev.exs or /config/config.exs but the Redix dependency I am using as a Redis interface is not loaded in /config
Below results in a reference error in /config:
Redix.start_link("redis://localhost:6379/3", name: :redix)
I only want to call this once on app load. Where should I put this call in my Phoenix app?
Adding {Redix, name: :redix} to the children array in application.ex adds the Redix process to the supervision tree, which means it will start along with your application:
children = [
  # Start the Ecto repository
  MyApp.Repo,
  # Start the Telemetry supervisor
  AppWeb.Telemetry,
  # Start the PubSub system
  ....
  # Single Redis connection
  {Redix, name: :redix}
]
See https://hexdocs.pm/redix/real-world-usage.html
You can check in iex -S mix:
iex(1)> Redix.command(:redix, ["PING"])
{:ok, "PONG"}
Now you can use all the regular Redix commands: https://hexdocs.pm/redix/readme.html#usage
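If you'd rather not reference the registered name all over the codebase, a thin wrapper module is a common pattern (purely illustrative; module and function names are up to you):
defmodule MyApp.Redis do
  @moduledoc "Thin convenience wrapper around the :redix connection."

  # Run any raw Redis command, e.g. MyApp.Redis.command(["PING"])
  def command(args) when is_list(args) do
    Redix.command(:redix, args)
  end

  def get(key), do: command(["GET", key])
  def set(key, value), do: command(["SET", key, value])
end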

ServiceStack.Redis.Sentinel Usage

I'm running a licensed version of ServiceStack and trying to get a Sentinel cluster set up on Google Cloud Compute.
The cluster is basically GCE's click-to-deploy redis solution - 3 servers. Here is the code I'm using to initialize...
var hosts = Settings.Redis.Host.Split(';');
var sentinel = new ServiceStack.Redis.RedisSentinel(hosts, "master");
redis = sentinel.Setup();
container.Register<IRedisClientsManager>(redis);
container.Register<ICacheClient>(redis.GetCacheClient());
The client works fine - but once I shut down one of the redis instances everything craps the bed. The client complains about not being able to connect to the missing instance. Additionally, even when I bring the instance back up - it is in READ ONLY mode, so everything still fails. There doesn't seem to be a way to recover once you are in this state...
Am I doing something wrong? Is there some reason the RedisSentinel client doesn't figure out who the new master is? I feed it all 3 host IP addresses...
You should only be supplying the host of the Redis Sentinel Server to RedisSentinel as it gets the active list of other master/slave redis servers from the Sentinel host.
Some changes to RedisSentinel were recently added in the latest v4.0.37 that's now available on MyGet which includes extra logging and callbacks of Redis Sentinel events. The new v4.0.37 API looks like:
var sentinel = new RedisSentinel(sentinelHost, masterName);
Starting the RedisSentinel will connect to the Sentinel Host and return a pre-configured RedisClientManager (i.e. a redis connection pool) populated with the active master/slave hosts:
var redisManager = sentinel.Start();
Which you can then register in the IOC with:
container.Register<IRedisClientsManager>(redisManager);
The RedisSentinel should then listen to master/slave changes from the Sentinel hosts and failover the redisManager accordingly. The existing connections in the pool are then disposed and replaced with a new pool for the newly configured hosts. Any active connections outside of the pool will throw connection exceptions if used again; the next time a RedisClient is retrieved from the pool it will be configured with the new hosts.
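For reference, code that pulls clients from the pool doesn't need to change for any of this; it keeps working against whatever hosts the manager currently points at, e.g. (illustrative key/value only):
using (var redis = redisManager.GetClient())
{
    redis.Set("foo", "bar");
    var value = redis.Get<string>("foo");
}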
Callbacks and Logging
Here's an example of how you can use the new callbacks to introspect the RedisServer events:
var sentinel = new RedisSentinel(sentinelHost, masterName)
{
    OnFailover = manager =>
    {
        "Redis Managers were Failed Over to new hosts".Print();
    },
    OnWorkerError = ex =>
    {
        "Worker error: {0}".Print(ex);
    },
    OnSentinelMessageReceived = (channel, msg) =>
    {
        "Received '{0}' on channel '{1}' from Sentinel".Print(channel, msg);
    },
};
Logging of these events can also be enabled by configuring Logging in ServiceStack:
LogManager.LogFactory = new ConsoleLogFactory(debugEnabled:false);
There's also an additional explicit FailoverToSentinelHosts() that can be used to force RedisSentinel to re-lookup and failover to the latest master/slave hosts, e.g:
var sentinelInfo = sentinel.FailoverToSentinelHosts();
The new hosts are available in the returned sentinelInfo:
"Failed over to read/write: {0}, read-only: {1}".Print(
sentinelInfo.RedisMasters, sentinelInfo.RedisSlaves);

logstash and centralised redis problems

I'm trying to get logstash working in a centralised setup using the docs as an example:
http://logstash.net/docs/1.2.2/tutorials/getting-started-centralized
I've got logstash (as indexer), redis, elasticsearch and standalone kibana3 running on my web server. I then need to run logstash as an agent on another server to collect apache logs and send them to the web server via redis. The number of agents will increase and the logs will vary, but for now I just want to get this working!
I need everything to run as a service so that all is well after reboots etc. All servers are running Ubuntu.
For all logstash instances (indexer and agent), I'm using the following init script (Ubuntu version, second gist):
https://gist.github.com/shadabahmed/5486949#file-logstash-ubuntu
For running redis as a service, I followed the instructions here:
http://redis.io/topics/quickstart (Installing redis more properly)
Elasticsearch is also running as a service.
On the web server, running redis-cli returns PONG correctly. Navigating to the correct Elasticsearch URL returns the correct JSON response. Navigating to the Kibana3 url gives me the dashboard, but no data. UFW is set to allow the redis port (at the moment from everywhere).
On the web server, my logstash.conf is:
input {
  file {
    path => "/var/log/apache2/access.log"
    type => "apache-access"
    sincedb_path => "/etc/logstash/.sincedb"
  }
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"
    codec => json
  }
}
filter {
  grok {
    type => "apache-access"
    pattern => "%{COMBINEDAPACHELOG}"
  }
}
output {
  elasticsearch {
    embedded => true
  }
  statsd {
    # Count one hit every event by response
    increment => "apache.response.%{response}"
  }
}
From the agent server, I can telnet successfully to the web server IP and redis port. logstash is running. The logstash.conf file is:
input {
  file {
    path => "/var/log/apache2/shift.access.log"
    type => "apache"
    sincedb_path => "/etc/logstash/since_db"
  }
  stdin {
    type => "example"
  }
}
filter {
  if [type] == "apache" {
    grok {
      pattern => "%{COMBINEDAPACHELOG}"
    }
  }
}
output {
  stdout { codec => rubydebug }
  redis { host => ["xx.xx.xx.xx"] data_type => "list" key => "logstash" }
}
If I comment out the stdin and stdout lines, I still don't get a result. The logstash logs do not give me any connection errors - only warnings about the deprecated grok settings format.
I have also tried running logstash from the command line (making sure to stop the daemonised service first). The apache log file is correctly output in the terminal, so I know that logstash is accessing the log correctly. And I can write random strings and they are output in the correct logstash format.
The redis logs on the web server show no sign of trouble......
The frustrating thing is that this has worked once. One message from stdin made it all the way through to elastic search. That was this morning just after getting everything setup. Since then, I have had no luck and I have no idea why!
Any tips/pointers gratefully received... Solving my problem will stop me tearing out more of my hair which will also make my wife happy......
UPDATE
Rather than filling the comments....
Thanks to @Vor and @rutter, I've confirmed that the user running logstash can read/write to the logstash.log file.
I've run the agent with -vv and the logs are populated with e.g.:
{:timestamp=>"2013-12-12T06:27:59.754000+0100", :message=>"config LogStash::Outputs::Redis/#host = [\"XX.XX.XX.XX\"]", :level=>:debug, :file=>"/opt/logstash/logstash.jar!/logstash/config/mixin.rb", :line=>"104"}
I then input random text into the terminal and get stdout results. However, I do not see anything in the logs until AFTER terminating the logstash agent. After the agent is terminated, I get lines like these in the logstash.log:
{:timestamp=>"2013-12-12T06:27:59.835000+0100", :message=>"Pipeline started", :level=>:info, :file=>"/opt/logstash/logstash.jar!/logstash/pipeline.rb", :line=>"69"}
{:timestamp=>"2013-12-12T06:29:22.429000+0100", :message=>"output received", :event=>#<LogStash::Event:0x77962b4d #cancelled=false, #data={"message"=>"test", "#timestamp"=>"2013-12-12T05:29:22.420Z", "#version"=>"1", "type"=>"example", "host"=>"Ubuntu-1204-precise-64-minimal"}>, :level=>:info, :file=>"(eval)", :line=>"16"}
{:timestamp=>"2013-12-12T06:29:22.461000+0100", :level=>:debug, :host=>"XX.XX.XX.XX", :port=>6379, :timeout=>5, :db=>0, :file=>"/opt/logstash/logstash.jar!/logstash/outputs/redis.rb", :line=>"230"}
But while I do get messages in stdout, I get nothing in redis on the other server. I can however telnet to the correct port on the other server, and I get "ping/PONG" in telnet, so redis on the other server is working..... And there are no errors etc in the redis logs.
It looks to me very much like the redis plugin on the logstash shipper agent is not working as expected, but for the life of me, I can't see where the breakdown is coming from.....
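A quick way to check whether the shipper's events actually reach the list (assuming redis-cli is available on the web server) is:
# length of the list the shipper pushes to; should grow as events arrive
redis-cli LLEN logstash
# watch commands hitting redis in real time
redis-cli MONITOR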