How to change Cassandra Docker config - authentication

I installed Cassandra from Docker Hub and it's running successfully.
root@localhost$ docker ps | grep cassandra
2925664e3391 cassandra:2.1.14 "/docker-entrypoin..." 5 months ago Up 23 minutes 0.0.0.0:7000-7001->7000-7001/tcp, 0.0.0.0:7199->7199/tcp, 0.0.0.0:9042->9042/tcp, 0.0.0.0:9160->9160/tcp, 0.0.0.0:32779->7000/tcp, 0.0.0.0:32778->7001/tcp, 0.0.0.0:32777->7199/tcp, 0.0.0.0:32776->9042/tcp, 0.0.0.0:32775->9160/tcp
I have connected my application to this Cassandra instance. I need to use password authentication to connect to Cassandra from my application.
To enable password authentication, I found the /etc/cassandra/cassandra.yaml file in the Docker image, and I have to follow the Authentication Config instructions to enable it.
Is there a way to apply these changes with the docker start or docker run command?

It is not handled by the part of the entrypoint script that generates cassandra.yaml, so no. You can submit a PR modifying the relevant part of the generation script to allow specifying auth via environment variables.
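In the meantime, a common workaround is to bypass the generated file entirely: copy cassandra.yaml out of the container, set authenticator: PasswordAuthenticator, and bind-mount the edited file over the one in the image. A rough sketch, using the container ID from the docker ps output above (the new container name and host path are placeholders):
# Copy the default config out of the running container
docker cp 2925664e3391:/etc/cassandra/cassandra.yaml ./cassandra.yaml
# Switch from AllowAllAuthenticator to password authentication
sed -i 's/^authenticator:.*/authenticator: PasswordAuthenticator/' ./cassandra.yaml
# Start a new container with the edited file mounted over the default one
docker run -d --name cassandra-auth -p 9042:9042 -v "$PWD/cassandra.yaml:/etc/cassandra/cassandra.yaml" cassandra:2.1.14
With PasswordAuthenticator enabled, the default superuser is cassandra with password cassandra; change it with ALTER USER once you can log in.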

Related

Error performing Erlang migration to another node

I have 2 nodes, rabbit2 and rabbit3. Everything was working fine until I started clustering them.
Then I ran the command
scp -r rabbit2:/var/lib/rabbitmq/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie .
and after successfully transferring the cookie, the node still failed.
Maybe the cookie file is not being used because of its file permissions, or another cookie file is taking priority.
Do you know how to run RabbitMQ in Erlang console mode?
If you can, enter the console first and check the problem with a command, e.g. the Erlang cookie check function.
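For example, a rough way to check both theories from a shell on the failing node (the paths are the usual RabbitMQ defaults and may differ on your install):
# The cookie must be owned by the rabbitmq user and readable only by it, or it is rejected
ls -l /var/lib/rabbitmq/.erlang.cookie
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
chmod 400 /var/lib/rabbitmq/.erlang.cookie
# Compare the cookie the running node actually uses on rabbit2 and rabbit3
rabbitmqctl eval 'erlang:get_cookie().'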

azdata erroring out due to config file error

All,
The admin set up a 3-node AKS cluster today. I updated the kube config file by running the az command
az aks get-credentials --name AKSBDCClus --resource-group AAAA-Dev-RG --subscription AAAA-Subscription
I was able to run all the kubectl commands fine, but when I tried setting up the SQL Server 2019 BDC by running azdata bdc create, it gave me the error Failed to complete kube config setup.
Since it was something to do with azdata and kubectl, I checked the azdata logs, and this is what I see in azdata.log:
Loading default kube config from C:\Users\rgn\.kube\config
Invalid kube-config file. Expected all values in kube-config/contexts list to have 'name' key
Thinking the config file could have got corrupted, I tried running az aks get-credentials --name AKSBDCClus --resource-group AAAA-Dev-RG --subscription AAAA-Subscription again.
This time I got a whole lot of errors:
The client 'rgn@mycompany.com' with object id 'XXXXX-28c3-YYYY-ZZZZ-AQAQAQd'
does not have authorization to perform action 'Microsoft.ContainerService/managedClusters/listClusterUserCredential/action'
over scope '/subscriptions/Subscription-ID/resourceGroups/
ResourceGroup-Dev-RG/providers/Microsoft.ContainerService/managedClusters/AKSCluster' or the scope is invalid. If access was recently granted, please refresh your credentials.
I logged out and logged back into Azure and retried, but got the same errors as above. I was even able to stop the VM scale set before I logged off for the day. Everything else works fine, but I'm unable to run the azdata script.
Can someone point me in the right direction?
Thanks,
rgn
Turns out that the config file was bad. I deleted the file and ran "az aks get-credentials" (after getting the necessary permissions to run it) and it worked. The old config was 19 KB, but the new one is 10 KB.
I guess I might have messed it up while testing "az aks get-credentials".
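For anyone who hits the same thing, the recovery steps were roughly the following (assuming the default kube config path from the log above and a Windows command prompt):
rem Back up the suspect config rather than deleting it outright
move C:\Users\rgn\.kube\config C:\Users\rgn\.kube\config.bak
rem Regenerate it (this needs the listClusterUserCredential permission mentioned in the error)
az aks get-credentials --name AKSBDCClus --resource-group AAAA-Dev-RG --subscription AAAA-Subscription
rem Sanity-check that every context entry now has a name
kubectl config get-contexts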

Issue using CLI commands to manage the RabbitMQ message broker on OpenShift

My RabbitMQ image is deployed on OpenShift and works perfectly fine.
I manage this image in the OpenShift web console.
However, when I want to use some of the administration CLI tools such as rabbitmqctl to manage the node (https://www.rabbitmq.com/rabbitmqctl.8.html), I get the following error:
‘Only root or rabbitmq should run rabbitmqctl’
I tried:
Adding group ownership of the RabbitMQ server files to the root group, which is not permitted:
$ chgrp -R 0 /var/lib/rabbitmq
chgrp: changing group of '/var/lib/rabbitmq': Operation not permitted
Connecting as root in the container, but I can't.
The rabbitmq-plugins command actually works; I can enable the different plugins with that CLI tool.
Any idea?
Update: I found a solution here:
https://docs.openshift.com/container-platform/3.11/creating_images/guidelines.html#openshift-specific-guidelines
I needed to modify my Dockerfile, as described in the link.
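For reference, the relevant change from those guidelines is to give the root group (GID 0) ownership of the RabbitMQ directories and the same permissions as the owner at image build time, because OpenShift runs containers under an arbitrary UID that always belongs to GID 0. A minimal Dockerfile sketch (the base image tag and directory list are assumptions, not taken from the question):
FROM rabbitmq:3-management
# Let an arbitrary UID in the root group read and write the data and config directories
RUN chgrp -R 0 /var/lib/rabbitmq /etc/rabbitmq && \
    chmod -R g=u /var/lib/rabbitmq /etc/rabbitmq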

Redis NOAUTH Authentication required. [tcp://127.0.0.1:6379] Laravel

I am working on a Laravel project that uses the dingo package to manage some APIs. I changed the CACHE_DRIVER=array variable in my .env file to CACHE_DRIVER=redis because dingo no longer supports array for CACHE_DRIVER. I therefore installed Redis on my system and included the package in my Laravel project by adding "predis/predis": "~1.0" to my composer.json and updating with the command composer update. Up until now everything worked just fine. However, when I try to create the database tables and seed them using php artisan migrate --seed, I get the error:
[Predis\Connection\ConnectionException]
SELECT failed: NOAUTH Authentication required. [tcp://127.0.0.1:6379]
Note: when I was installing Redis, I added a password. I also authenticated using the two commands redis-cli (to open the Redis prompt) and then AUTH mypassword. Yet when I try to seed, it still throws the same error. Please, what am I doing wrong?
Thanks for any help.
I would start by setting the Redis password in the REDIS_PASSWORD environment variable (e.g. in the .env file). See https://laravel.com/docs/5.3/redis#configuration for more details about Redis configuration in Laravel.
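Concretely, that usually means something like the following in .env (the values are placeholders; Laravel 5.3's default config/database.php reads REDIS_HOST, REDIS_PASSWORD and REDIS_PORT):
CACHE_DRIVER=redis
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=mypassword
REDIS_PORT=6379
After editing .env, run php artisan config:clear so a previously cached configuration does not keep the old (empty) password.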
Make sure you are running your redis server when you are seeding.
Run: redis-server
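You can quickly verify both the server and the password from a terminal before seeding (mypassword being whatever you set during installation):
redis-server --daemonize yes
redis-cli -a mypassword ping
# a correct password returns PONG; a missing or wrong one returns the same NOAUTH error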

Google Cloud Dataproc failing to create new cluster with initialization scripts

I am using the command below to create a Dataproc cluster:
gcloud dataproc clusters create informetis-dev
--initialization-actions "gs://dataproc-initialization-actions/jupyter/jupyter.sh,gs://dataproc-initialization-actions/cloud-sql-proxy/cloud-sql-proxy.sh,gs://dataproc-initialization-actions/hue/hue.sh,gs://dataproc-initialization-actions/ipython-notebook/ipython.sh,gs://dataproc-initialization-actions/tez/tez.sh,gs://dataproc-initialization-actions/oozie/oozie.sh,gs://dataproc-initialization-actions/zeppelin/zeppelin.sh,gs://dataproc-initialization-actions/user-environment/user-environment.sh,gs://dataproc-initialization-actions/list-consistency-cache/shared-list-consistency-cache.sh,gs://dataproc-initialization-actions/kafka/kafka.sh,gs://dataproc-initialization-actions/ganglia/ganglia.sh,gs://dataproc-initialization-actions/flink/flink.sh"
--image-version 1.1 --master-boot-disk-size 100GB --master-machine-type n1-standard-1 --metadata "hive-metastore-instance=g-test-1022:asia-east1:db_instance"
--num-preemptible-workers 2 --num-workers 2 --preemptible-worker-boot-disk-size 1TB --properties hive:hive.metastore.warehouse.dir=gs://informetis-dev/hive-warehouse
--worker-machine-type n1-standard-2 --zone asia-east1-b --bucket info-dev
But Dataproc failed to create the cluster, with the following errors in the failure file:
cat
+ mysql -u hive -phive-password -e '' ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (111)
+ mysql -e 'CREATE USER '\''hive'\'' IDENTIFIED BY '\''hive-password'\'';' ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (111)
Does anyone have any idea what is behind this failure?
It looks like you're missing the --scopes sql-admin flag as described in the initialization action's documentation, which will prevent the CloudSQL proxy from being able to authorize its tunnel into your CloudSQL instance.
Additionally, aside from just the scopes, you need to make sure the default Compute Engine service account has the right project-level permissions in whichever project holds your CloudSQL instance. Normally the default service account is a project editor in the GCE project, so that should be sufficient when combined with the sql-admin scopes to access a CloudSQL instance in the same project, but if you're accessing a CloudSQL instance in a separate project, you'll also have to add that service account as a project editor in the project which owns the CloudSQL instance.
You can find the email address of your default Compute Engine service account on the IAM page of the project in which you deploy Dataproc clusters, under the name "Compute Engine default service account"; it should look something like <number>@project.gserviceaccount.com.
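In practice that means adding the scope to the cluster create command and, for the cross-project case, granting that service account editor rights on the project owning the Cloud SQL instance. A rough sketch (the project ID and service account address are placeholders):
# Add this flag to the gcloud dataproc clusters create command shown above
--scopes sql-admin
# Cross-project only: make the default compute service account an editor
# of the project that owns the Cloud SQL instance
gcloud projects add-iam-policy-binding cloudsql-project-id \
  --member serviceAccount:123456789-compute@developer.gserviceaccount.com \
  --role roles/editor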
I am assuming that you already created the Cloud SQL instance with something like this, correct?
gcloud sql instances create g-test-1022 \
--tier db-n1-standard-1 \
--activation-policy=ALWAYS
If so, then it looks like the error is in how the argument for the metadata is formatted. You have this:
--metadata "hive-metastore-instance=g-test-1022:asia-east1:db_instance"
Unfortunately, the zone looks to be incomplete (asia-east1 instead of asia-east1-b).
Additionally, when running that many initialization actions, you'll want to provide a pretty generous initialization action timeout so the cluster does not assume something has failed while your actions take a while to install. You can do that by specifying:
--initialization-action-timeout 30m
That will allow the cluster to give the initialization actions 30 minutes to bootstrap.
Around the time you reported this, an issue had been detected with the Cloud SQL proxy initialization action, and it is quite likely that this issue affected you.
Nowadays, it should no longer be a problem.