How to check if the config command name is changed in AWS ElastiCache (Redis)

I am trying to access AWS ElastiCache (Redis). I followed these instructions:
https://redsmin.uservoice.com/knowledgebase/articles/734646-amazon-elasticache-and-redsmin
Redis is connected now, but when I click on Configuration I get this error:
"Redsmin can't load the configuration. Check with your provider that you have access to the configuration command."
edit 1:

The config Redis command is unfortunately not available on AWS ElastiCache; see their documentation:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/RestrictedCommands.html
To deliver a managed service experience, ElastiCache restricts access to certain cache engine-specific commands that require advanced privileges. For cache clusters running Redis, the following commands are unavailable:
[...]
config
That's why the Redsmin configuration module (the only module impacted) cannot display your current Redis AWS ElastiCache configuration.
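You can confirm the restriction yourself with redis-cli from an EC2 instance in the same VPC as the cluster. This is only a sketch: the endpoint below is a placeholder, and the exact error text varies by engine version.

```shell
# Any CONFIG subcommand fails on ElastiCache because the command is restricted,
# typically with an error like "ERR unknown command 'CONFIG'":
redis-cli -h primary-endpoint.xxxxxx.cache.amazonaws.com -p 6379 CONFIG GET maxmemory
```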

Related

TLS communication between ignite pods

I built an Ignite cluster in my Kubernetes VM. I want to configure my Ignite pods to use TLS without certificate validation.
I created a keystore manually in each pod; these are binary files. How can I have them created as part of the chart deployment?
Should I create the files beforehand and add them to a ConfigMap, or run a shell command during the build to create them?
Can you please assist? I'm new to Kubernetes and SSL/TLS.
You need to configure your node to use the appropriate SSL/TLS settings per this guide: https://ignite.apache.org/docs/latest/security/ssl-tls
Docs for using a ConfigMap to create an Ignite configuration file for a node: https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-configmap-for-node-configuration-file
You could set up the SSL/TLS files as ConfigMaps, or alternatively use a StatefulSet and create a PersistentVolume to hold the files: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
See the tab on https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-pod-configuration for instructions on how to mount persistent volumes for an Ignite StatefulSet.
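As a minimal sketch of the ConfigMap approach (all names and paths here are hypothetical): binary keystore files are fine in a ConfigMap, since kubectl stores them under binaryData. Create it with `kubectl create configmap ignite-keystore --from-file=keystore.jks --from-file=truststore.jks`, then mount it in the pod template of your chart:

```yaml
# Pod spec fragment (hypothetical names): mount the keystore ConfigMap
# read-only at the path your Ignite SSL configuration points to.
spec:
  containers:
    - name: ignite
      volumeMounts:
        - name: ignite-keystore
          mountPath: /opt/ignite/ssl
          readOnly: true
  volumes:
    - name: ignite-keystore
      configMap:
        name: ignite-keystore
```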

Why does Spinnaker need to list namespaces when the namespaces are already known to it?

Kubernetes namespaces are configured before Spinnaker is even deployed, so Spinnaker should be able to deploy into them in a namespace-restricted enterprise environment. But this answer says Spinnaker will not run in that setting: Spinnaker with restricted namespace access
Why does Spinnaker require read access to namespaces when those names are already known to it?
Why does the error response contain the name of the namespace that it is trying to list?
I forked Halyard so that it uses client.pods().list() to verify the Kubernetes connection, and with that change it is able to deploy Spinnaker. Spinnaker seems to work as long as it takes namespace names from its cache; when it uses live-manifest-calls or refreshes its cache, the namespace pulldowns stop working.
You don't actually need it, just proper configuration for Halyard and Spinnaker.
See these instructions.
Configure Spinnaker to install in Kubernetes
Important: This will by default limit your Spinnaker to deploying to the namespace specified. If you want to be able to deploy to other namespaces, either add a second cloud provider target or remove the --namespaces flag.
Use the Halyard hal command-line tool to configure Halyard to install Spinnaker in your Kubernetes cluster:
hal config deploy edit \
--type distributed \
--account-name ${ACCOUNT_NAME} \
--location ${NAMESPACE}
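For example, the set of namespaces a Kubernetes provider account may deploy to can be edited later with hal (the account and namespace names below are placeholders):

```shell
# Restrict (or extend) the namespaces the Kubernetes account may deploy to;
# omitting --namespaces entirely allows all namespaces the credentials can see.
hal config provider kubernetes account edit ${ACCOUNT_NAME} \
  --namespaces ${NAMESPACE},my-other-namespace
```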

Spinnaker AWS Provider not allowing create cluster

I deployed Spinnaker in AWS to run a test in the same account; however, I'm unable to configure server groups. If I click Create, the task is queued with the account configured via hal on the CLI. Is there any way to troubleshoot this? The logs are looking light.
The storage backend needs to be configured correctly:
https://www.spinnaker.io/setup/install/storage/
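A minimal sketch of configuring S3 as the storage backend with hal (the bucket name and region are placeholders):

```shell
# Point Spinnaker's persistent storage at an S3 bucket, select s3 as the
# storage type, then redeploy so the change takes effect.
hal config storage s3 edit \
  --bucket my-spinnaker-bucket \
  --region us-west-2
hal config storage edit --type s3
hal deploy apply
```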

Using AWS Elasticache Redis to manage sessions in Sails.js

I'm currently using connect-redis in my Sails.js project to leverage a locally-installed redis instance. In the future, I'd like to use a common redis instance for multiple server instances (behind a load balancer), so I've been looking at AWS Elasticache. I'm having trouble with the configuration, though.
sails-project\config\session.js:
adapter: 'connect-redis',
host: 'primary-endpoint.xxxxxx.ng.0001.apse1.cache.amazonaws.com',
port: 6379,
ttl: <redis session TTL in seconds>,
db: 0,
pass: <redis auth password>,
prefix: 'sess:',
What should the TTL value be? Should the pass attribute point to IAM somehow?
I tried creating a user in IAM with AmazonElastiCacheFullAccess permissions and putting its access key ID in the pass attribute, but I got this error in my server console (testing on my Windows box):
C:\repos\sails-project\node_modules\connect-redis\lib\connect-redis.js:83
throw err;
^
AbortError: Redis connection lost and command aborted. It might have been processed.
at RedisClient.flush_and_error (C:\repos\sails-project\node_modules\redis\index.js:362:23)
...
Any ideas on what to change?
I'm going to assume your "Windows box" is outside of AWS.
You can't access ElastiCache from outside AWS. See the security section here: https://aws.amazon.com/elasticache/faqs/#Can_I_access_Amazon_ElastiCache_from_outside_AWS
The most common use case is for EC2 instances within a VPC to access and consume the ElastiCache service. In addition, the ElastiCache Redis service doesn't employ authentication and only allows lockdown via security groups.
If you need something different from this configuration, you should look at putting Redis on EC2 so that you have full control.
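Under that setup (an EC2 client inside the same VPC, no Redis authentication), a hedged sketch of config/session.js would drop the pass attribute and use a numeric ttl in seconds; the endpoint and TTL values below are purely illustrative:

```javascript
// config/session.js — illustrative values only; access is controlled
// by security groups rather than a password, so `pass` is omitted.
module.exports.session = {
  adapter: 'connect-redis',
  host: 'primary-endpoint.xxxxxx.cache.amazonaws.com', // ElastiCache primary endpoint
  port: 6379,
  ttl: 86400,      // session lifetime in seconds (here: 24 hours)
  db: 0,
  prefix: 'sess:'
};
```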

Retrieve application config from secure location during task start

I want to make sure I'm not storing sensitive keys and credentials in source or in docker images. Specifically I'd like to store my MySQL RDS application credentials and copy them when the container/task starts. The documentation provides an example of retrieving the ecs.config file from s3 and I'd like to do something similar.
I'm using the Amazon ECS-optimized AMI with an Auto Scaling group that registers with my ECS cluster, and the Ghost Docker image without any customization. Is there a way to configure what I'm trying to do?
You can define a volume on the host and map it into the container with read-only privileges.
Please refer to the following documentation for configuring a volume for an ECS task:
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html
Even though the container does not have the config at build time, it will read the configs as if they were available in its own file system.
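As a sketch, the host volume and read-only mount in the task definition might look like this (the names and paths are hypothetical, and the container path depends on where the Ghost image expects its config):

```json
{
  "volumes": [
    { "name": "app-config", "host": { "sourcePath": "/opt/app/config" } }
  ],
  "containerDefinitions": [
    {
      "name": "ghost",
      "mountPoints": [
        {
          "sourceVolume": "app-config",
          "containerPath": "/var/lib/ghost/config",
          "readOnly": true
        }
      ]
    }
  ]
}
```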
There are many ways to secure the config on the host OS.
In past projects, I have achieved this by disabling SSH into the host and injecting the config at boot-up using cloud-init.
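Mirroring the ecs.config example from the documentation, the boot-time injection can be sketched as a user-data script that copies the credentials file from a private S3 bucket onto the host path the task mounts (the bucket name and paths are placeholders):

```shell
#!/bin/bash
# Runs at instance boot via EC2 user data / cloud-init.
# Fetch the app credentials from a private S3 bucket into the host
# directory that the ECS task maps into the container read-only.
yum install -y aws-cli
mkdir -p /opt/app/config
aws s3 cp s3://my-private-bucket/ghost-config.json /opt/app/config/ghost-config.json
```

The instance profile needs an IAM role with s3:GetObject on that bucket, so the credentials never appear in source control or the Docker image.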