TLS communication between Ignite pods - SSL

I built an Ignite cluster in my Kubernetes VM, and I want to configure my Ignite pods to work with TLS without certificate validation.
I created a keystore manually in each pod; these keystores are binary files. How can I have them created as part of the chart deployment?
Should I create the files beforehand and add them to a ConfigMap, or run a shell command during deployment to create them?
Can you please assist? I'm new to Kubernetes and SSL/TLS.

You need to configure your node to use the appropriate SSL/TLS settings per this guide: https://ignite.apache.org/docs/latest/security/ssl-tls
Docs for using a ConfigMap to create an Ignite node configuration file: https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-configmap-for-node-configuration-file
You could set up the SSL/TLS files as ConfigMaps or, alternatively, use a StatefulSet and create a PersistentVolume to hold the files: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
See the tab on https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-pod-configuration for instructions on how to mount persistent volumes for an Ignite StatefulSet.
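Since a keystore is a binary file, a Secret (or a ConfigMap with binaryData) is the usual way to ship a pre-generated one into the pods. A minimal sketch, assuming a keystore.jks created beforehand with keytool; the Secret name and mount path are illustrative, not from your chart:

# Create the Secret from the binary keystore first:
#   kubectl create secret generic ignite-keystore --from-file=keystore.jks
# Then mount it read-only in the pod template spec of the Ignite StatefulSet:
spec:
  volumes:
    - name: keystore
      secret:
        secretName: ignite-keystore
  containers:
    - name: ignite
      volumeMounts:
        - name: keystore
          mountPath: /opt/ignite/certs   # the path your node configuration points at
          readOnly: true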

Related

How to check if the CONFIG command name is changed in AWS ElastiCache (Redis)

I am trying to access AWS ElastiCache (Redis). I followed these instructions:
https://redsmin.uservoice.com/knowledgebase/articles/734646-amazon-elasticache-and-redsmin
Redis is connected now, but when I click on Configuration I get this error:
"Redsmin can't load the configuration. Check with your provider that you have access to the configuration command."
edit 1:
The CONFIG command is sadly not available on AWS ElastiCache; see their documentation:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/RestrictedCommands.html
To deliver a managed service experience, ElastiCache restricts access to certain cache engine-specific commands that require advanced privileges. For cache clusters running Redis, the following commands are unavailable:
[...]
config
That's why the Redsmin configuration module (the only module impacted) cannot display your current Redis AWS ElastiCache configuration.
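For illustration, this is roughly what the restriction looks like from redis-cli; the endpoint is a placeholder and the exact error text may vary by engine version:

redis-cli -h my-cache.abc123.use1.cache.amazonaws.com -p 6379
> CONFIG GET maxmemory
(error) ERR unknown command 'CONFIG'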

Kafka on Kubernetes with SSL

I have a Kafka cluster running on K8s, using the Confluent Kafka image, and I have an EXTERNAL listener that is working.
How can I add SSL encryption? Should I use an Ingress? Where can I find good documentation?
Thank you
There is a manual approach in this gist, which does not use the Confluent image.
But for Confluent and its Helm chart (see "Confluent Operator: Getting Started with Apache Kafka and Kubernetes" from Rohit Bakhshi), you can follow:
"Encryption, authentication and external access for Confluent Kafka on Kubernetes" from Ryan Morris
Out of the box, the helm chart doesn’t support SSL configurations for encryption and authentication, or exposing the platform for access from outside the Kubernetes cluster.
To implement these requirements, there are a few modifications to the installation needed.
In summary, they are (a sketch of the first two steps follows the list):
Generate some private keys/certificates for brokers and clients
Create Kubernetes Secrets to provide them within your cluster
Update the broker StatefulSet with your Secrets and SSL configuration
Expose each broker pod via an external service
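A hedged sketch of those first two steps; the keystore name, alias, and passwords are illustrative, not taken from the linked post:

# 1. Generate a broker keystore (self-signed here for brevity)
keytool -genkeypair -alias kafka-broker -keyalg RSA -validity 365 \
  -keystore kafka.broker.keystore.jks \
  -storepass changeit -keypass changeit \
  -dname "CN=kafka-broker"
# 2. Package it as a Kubernetes Secret for the broker StatefulSet to mount
kubectl create secret generic kafka-ssl \
  --from-file=kafka.broker.keystore.jks \
  --from-literal=keystore-password=changeit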
I recommend using the Strimzi Kafka operator to deploy Kafka to Kubernetes. I've been using it in production for a year now.
It supports SSL, external load balancers, the Kafka exporter, etc.
Strimzi Kafka Operator
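For a sense of what that looks like, a minimal Strimzi Kafka resource with TLS on both an internal and an external listener; the cluster name, replica counts, and storage sizes are illustrative:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: tls          # TLS-encrypted listener for in-cluster clients
        port: 9093
        type: internal
        tls: true
      - name: external     # exposed via per-broker LoadBalancer services, also TLS
        port: 9094
        type: loadbalancer
        tls: true
    storage:
      type: persistent-claim
      size: 100Gi
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi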

Why does Spinnaker need to list namespaces when the namespaces are already known to it?

Kubernetes namespaces are configured before Spinnaker is even deployed, so Spinnaker should be able to deploy into them in a namespace-restricted enterprise environment. But this answer says Spinnaker will not run in that setting: Spinnaker with restricted namespace access
Why does Spinnaker require read access to namespaces when those names are already known to it?
Why does the error response contain the name of the namespace that it is trying to list?
I forked Halyard so that it uses client.pods().list() to verify the K8s connection, and it is able to deploy Spinnaker. Spinnaker seems to work as long as it takes namespace names from its cache. When it uses live-manifest-calls or refreshes its cache, the namespace pulldowns stop working.
You don't actually need it, just proper configuration of Halyard and Spinnaker.
See these instructions:
Configure Spinnaker to install in Kubernetes
Important: This will by default limit your Spinnaker to deploying to the namespace specified. If you want to be able to deploy to other namespaces, either add a second cloud provider target or remove the --namespaces flag.
Use the Halyard hal command-line tool to configure Halyard to install Spinnaker in your Kubernetes cluster:
hal config deploy edit \
--type distributed \
--account-name ${ACCOUNT_NAME} \
--location ${NAMESPACE}
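The --namespaces flag mentioned above lives on the Kubernetes provider account. A hedged example of registering an account restricted to one namespace; the account name is illustrative, and dropping --namespaces (or listing several) widens access:

hal config provider kubernetes account add my-k8s-account \
  --context $(kubectl config current-context) \
  --namespaces ${NAMESPACE}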

Retrieve application config from secure location during task start

I want to make sure I'm not storing sensitive keys and credentials in source control or in Docker images. Specifically, I'd like to store my MySQL RDS application credentials and copy them in when the container/task starts. The documentation provides an example of retrieving the ecs.config file from S3, and I'd like to do something similar.
I'm using the Amazon ECS-optimized AMI with an Auto Scaling group that registers with my ECS cluster. I'm using the Ghost Docker image without any customization. Is there a way to configure what I'm trying to do?
You can define a volume on the host and map it into the container with read-only privileges.
Please refer to the following documentation for configuring an ECS volume for an ECS task:
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html
Even though the container does not have the config at build time, it will read the config as if it were available in its own file system.
There are many ways to secure the config on the host OS.
In past projects, I have achieved the same by disabling SSH into the host and injecting the config at boot-up using cloud-init.
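For illustration, a trimmed task definition that maps a host directory into the Ghost container read-only; the host path and container path are assumptions, not part of the original setup:

{
  "family": "ghost",
  "volumes": [
    { "name": "app-config", "host": { "sourcePath": "/etc/ghost-config" } }
  ],
  "containerDefinitions": [
    {
      "name": "ghost",
      "image": "ghost",
      "memory": 512,
      "mountPoints": [
        {
          "sourceVolume": "app-config",
          "containerPath": "/var/lib/ghost/config",
          "readOnly": true
        }
      ]
    }
  ]
}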

Creating AMQ network of broker clusters on JBoss Fuse 6.2, without fabric

I want to create (2) broker clusters connected by a network of brokers in JBoss Fuse 6.2; each cluster has 2 master/slave pairs.
It's a small cluster, so we don't intend to use Fabric/Zookeeper; everything will be statically configured, no auto discovery.
Questions
Is it possible to use fabric profiles to build the topology, but avoid using fabric at runtime?
Can we use Git, or something similar, for centrally managing container config files, again, without fabric?
We tried creating profiles using fabric:mq-create, but the command is not available unless a fabric is first created, which defeats the purpose.
No, fabric profiles require using fabric. You can use Git to store files, but you cannot have JBoss Fuse use it automatically the way it does with fabric; you would need to use Git manually.
The AMQ broker in JBoss Fuse is just standard Apache ActiveMQ, so you can configure it manually/statically as a network of brokers. It's just not very easy to do if you haven't done it before.
See the JBoss A-MQ documentation, which covers the broker: http://www.jboss.org/products/amq/overview/
For example: https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_A-MQ/6.2/html/Using_Networks_of_Brokers/index.html
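As a starting point, a static network connector between the two clusters might look like this in each broker's activemq.xml; the broker names, host names, ports, and duplex setting are illustrative, not a verified topology for your setup:

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="a1">
  <networkConnectors>
    <!-- Statically bridge to cluster B's master/slave pair; the
         bridge connects to whichever of the two is currently master -->
    <networkConnector name="a-to-b"
        uri="static:(tcp://b1:61616,tcp://b2:61616)"
        duplex="true"/>
  </networkConnectors>
  <transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
  </transportConnectors>
</broker>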