SSL certificate chain completion is disabled - ssl

I have installed a K8S cluster via Rancher and it is up and running, except:
Looking into the log, it says:
W0414 09:19:00.301391 9 flags.go:221] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: openresty/1.15.8.1
W0414 09:19:00.305182 9 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0414 09:19:00.305528 9 main.go:183] Creating API client for https://10.43.0.1:443
I0414 09:19:00.313466 9 main.go:227] Running in Kubernetes cluster version v1.17 (v1.17.4) - git (clean) commit 8d8aa39598534325ad77120c120a22b3a990b5ea - platform linux/amd64
I0414 09:19:00.318605 9 main.go:91] Validated ingress-nginx/default-http-backend as the default backend.
I0414 09:19:00.604027 9 main.go:102] Created fake certificate with PemFileName: /etc/ingress-controller/ssl/default-fake-certificate.pem
I0414 09:19:00.657480 9 nginx.go:274] Starting NGINX Ingress controller
I0414 09:19:00.692928 9 event.go:258] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"nginx-configuration", UID:"5d7200c0-88cc-4b9e-97a3-e28e2a529970", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/nginx-configuration
I0414 09:19:00.696624 9 event.go:258] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"8d6491ff-b617-4b01-b318-d4afd66cdb1f", APIVersion:"v1", ResourceVersion:"662", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0414 09:19:00.696864 9 event.go:258] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"4ded14e4-6669-49de-89e6-f15007751014", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0414 09:19:01.858339 9 nginx.go:318] Starting NGINX process
I0414 09:19:01.858724 9 leaderelection.go:235] attempting to acquire leader lease ingress-nginx/ingress-controller-leader-nginx...
I0414 09:19:01.859332 9 controller.go:133] Configuration changes detected, backend reload required.
I0414 09:19:01.862863 9 status.go:86] new leader elected: nginx-ingress-controller-28k5x
E0414 09:19:01.912314 9 controller.go:145] Unexpected failure reloading the backend:
-------------------------------------------------------------------------------
Error: exit status 1
nginx: the configuration file /tmp/nginx-cfg115117149 syntax is ok
2020/04/14 09:19:01 [emerg] 41#41: bind() to 0.0.0.0:80 failed (13: Permission denied)
nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
nginx: configuration file /tmp/nginx-cfg115117149 test failed
What am I doing wrong?
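The "SSL certificate chain completion is disabled" line in the title is only an informational warning; the reload fails on `bind() to 0.0.0.0:80 failed (13: Permission denied)`, which is what nginx reports when it runs as a non-root user without the capability to bind ports below 1024. As an added example, not taken from the question or from Rancher's own manifests, here is a Deployment excerpt that keeps the controller unprivileged and grants only `NET_BIND_SERVICE` instead of running it as root; the Deployment name, labels, service account name and image tag are assumed placeholders, while the ConfigMap and default-backend names come from the log above.

```yaml
# Added example, not the question's manifest: least-privilege securityContext
# for the ingress-nginx controller so nginx can bind :80/:443 without root.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller          # assumed name
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx                  # assumed label
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      serviceAccountName: nginx-ingress-serviceaccount   # assumed name
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:<VERSION>  # placeholder tag
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            # Alternative needing no extra capability at all: serve on
            # unprivileged ports and let the Service map 80/443 onto them.
            # - --http-port=8080
            # - --https-port=8443
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            # uid 101 is the www-data user the ingress-nginx image runs as.
            runAsUser: 101
            runAsNonRoot: true
            # Upstream manifests keep this true: with no-new-privs set, the
            # file capability on the controller binary would not take effect.
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL                     # start from nothing ...
              add:
                - NET_BIND_SERVICE        # ... and add back only port binding
```

If the cluster enforces a PodSecurityPolicy (an option Rancher can enable), the policy applied to the ingress-nginx namespace must list NET_BIND_SERVICE in its allowed capabilities, otherwise the admission controller strips it and the bind error above returns.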

Related

Datastax Driver giving connection error after enabling client to node SSL on Cassandra port 9142

Enabled SSL on Cassandra nodes on port 9142. The service is running fine when testing it locally but I get AllNodesFailedException when deploying on an ECS cluster. I am using the same keystore locally. The non-SSL port 9042 is working OK.
Failed to instantiate [com.datastax.oss.driver.api.core.CqlSession]:
Factory method 'session' threw exception; nested exception is
com.datastax.oss.driver.api.core.AllNodesFailedException: Could not
reach any contact point, make sure you've provided valid addresses
(showing first 3 nodes, use getAllErrors() for more):
Node(endPoint=ip-10-18-28-203.us-west-2.compute.internal/10.18.28.203:9142,
hostId=null, hashCode=6551c917):
[io.netty.channel.ConnectTimeoutException: connection timed out:
ip-10-18-28-203.us-west-2.compute.internal/10.18.28.203:9142],
Node(endPoint=ip-10-18-8-110.us-west-2.compute.internal/10.18.8.110:9142,
hostId=null, hashCode=36985f57):
[io.netty.channel.ConnectTimeoutException: connection timed out:
ip-10-18-8-110.us-west-2.compute.internal/10.18.8.110:9142],
Node(endPoint=ip-10-18-7-47.us-west-2.compute.internal/10.18.7.47:9142,
hostId=null, hashCode=8eab7e9):
[io.netty.channel.ConnectTimeoutException: connection timed out:
ip-10-18-7-47.us-west-2.compute.internal/10.18.7.47:9142]
cassandra.yaml properties
server_encryption_options:
    internode_encryption: none
    keystore: /etc/cassandra/conf/casskeystore
    keystore_password: changeit
    truststore: conf/.truststore
    truststore_password: cassandra
client_encryption_options:
    enabled: true
    optional: true
    keystore: /etc/cassandra/conf/casskeystore
    keystore_password: changeit

Apache Kafka doesn't start after SSL configuration

I have Apache Kafka (v. 2.13-3.0.0) installed on a remote Ubuntu server.
I followed this tutorial to secure my cluster:
https://medium.com/egen/securing-kafka-cluster-using-sasl-acl-and-ssl-dec15b439f9d
but when I try to start Kafka with the JAAS conf file using these commands:
export KAFKA_OPTS=-Djava.security.auth.login.config=<kafka-binary-dir>/config/kafka_server_jaas.conf
./bin/kafka-server-start.sh ./config/server.properties
I receive this error:
[2021-11-12 10:30:47,864] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2021-11-12 10:30:48,089] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2021-11-12 10:30:48,099] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.lang.ClassNotFoundException: kafka.security.auth.SimpleAclAuthorizer
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
at org.apache.kafka.common.utils.Utils.loadClass(Utils.java:417)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:406)
at kafka.security.authorizer.AuthorizerUtils$.createAuthorizer(AuthorizerUtils.scala:31)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1583)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1394)
at kafka.Kafka$.buildServer(Kafka.scala:67)
at kafka.Kafka$.main(Kafka.scala:87)
at kafka.Kafka.main(Kafka.scala)
This is the SSL config in the server.properties file:
########### SECURITY using SCRAM-SHA-512 and SSL
listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
advertised.listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
security.inter.broker.protocol=SASL_SSL
ssl.endpoint.identification.algorithm=
ssl.client.auth=required
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.enabled.mechanisms=SCRAM-SHA-512
# Broker security settings
ssl.truststore.location=/home/kafka/Downloads/kafka_2.13-3.0.0/config/truststore/kafka.truststore.jks
ssl.truststore.password=giuseppe
ssl.keystore.location=/home/kafka/Downloads/kafka_2.13-3.0.0/config/keystore/kafka.keystore.jks
ssl.keystore.password=giuseppe
ssl.key.password=giuseppe
# ACLs
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:admin
#zookeeper SASL
zookeeper.set.acl=false
########### SECURITY using SCRAM-SHA-512 and SSL
If I comment out the two ACL rows, I receive this error:
[2021-11-12 11:05:29,301] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-11-12 11:05:29,331] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /tmp/kafka-logs. A Kafka instance in another process or thread is using this directory.
at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:241)
at scala.collection.StrictOptimizedIterableOps.flatMap(StrictOptimizedIterableOps.scala:117)
at scala.collection.StrictOptimizedIterableOps.flatMap$(StrictOptimizedIterableOps.scala:104)
at scala.collection.mutable.ArraySeq.flatMap(ArraySeq.scala:37)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:236)
at kafka.log.LogManager.<init>(LogManager.scala:112)
at kafka.log.LogManager$.apply(LogManager.scala:1283)
at kafka.server.KafkaServer.startup(KafkaServer.scala:254)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
What is the cause? Could it be a wrong configuration?
Thanks.
Update:
Changing the row to:
# ACLs
authorizer.class.name=org.apache.kafka.server.authorizer.Authorizer
I receive this new error:
[2021-11-12 16:51:57,613] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
org.apache.kafka.common.KafkaException: Could not find a public no-argument constructor for org.apache.kafka.server.authorizer.Authorizer
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:392)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:406)
at kafka.security.authorizer.AuthorizerUtils$.createAuthorizer(AuthorizerUtils.scala:31)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1583)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1394)
at kafka.Kafka$.buildServer(Kafka.scala:67)
at kafka.Kafka$.main(Kafka.scala:87)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.NoSuchMethodException: org.apache.kafka.server.authorizer.Authorizer.<init>()
at java.base/java.lang.Class.getConstructor0(Class.java:3508)
at java.base/java.lang.Class.getDeclaredConstructor(Class.java:2711)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:390)
... 7 more
It just seems that if you change
kafka.security.auth.SimpleAclAuthorizer
to
kafka.security.authorizer.AclAuthorizer
it should work; it worked for me.
Kafka 3.0 removed SimpleAclAuthorizer.
Pull request - https://github.com/apache/kafka/commit/976e78e405d57943b989ac487b7f49119b0f4af4#diff-e0ccf1b5c964d2c303b6a69a8b8b67df5a6bfbae8aa514f580d353c4c6bf8e36
The blog seems to be using version 2.2.0.

How to fix salt-minion service entering "reloading" state after executing apache.modules

Facing a strange issue!
When I list Apache modules on the minion by executing the following command from the salt-master,
# salt 'target' apache.modules
the salt-minion service on the minion enters the "reloading" state.
# systemctl status salt-minion
● salt-minion.service - The Salt Minion
Loaded: loaded (/usr/lib/systemd/system/salt-minion.service; enabled; vendor preset: disabled)
Active: reloading (reload) since Wed 2021-09-29 00:48:29 PDT; 2 weeks 2 days ago
Docs: man:salt-minion(1)
file:///usr/share/doc/salt/html/contents.html
https://docs.saltstack.com/en/latest/contents.html
Main PID: 3582 (salt-minion)
Status: "Reading configuration..."
A few details:
It is important to note that previously it was Apache/2.4.6 (CentOS) and this issue started after upgrading to Apache/2.4.48 (IUS).
salt-master and salt-minion are both version 3003.1.
It is a CentOS 7 VM.
The service remains in the "reloading" state until restarted.
The minion logs do not log anything on the change of service state.

SSL for Phoenix working locally but not on production

I have a Phoenix/Elixir app that works fine with https locally; however, when I try to change it to use the production certificates the server does not respond and no error messages are shown.
In my dev.exs this was made with the hostname localhost.
In prod.exs here are the keys. These were made with my production URL.
I have tried to change localhost to the production URL locally by adding host to the https portion of the config:
https: [port: 443,
        host: "produrl.com"
        keyfile: "priv/keys/domain.key",
        certfile: "priv/keys/domain.crt"],
This throws an error:
sudo MIX_ENV=prod mix phoenix.server
[info] Running LiteChartBe.Endpoint with Cowboy using http://localhost:80
[info] Application lite_chart_be exited: LiteChartBe.start(:normal, []) returned an error: shutdown: failed to start child: LiteChartBe.Endpoint
** (EXIT) shutdown: failed to start child: Phoenix.Endpoint.Server
** (EXIT) shutdown: failed to start child: {:ranch_listener_sup, LiteChartBe.Endpoint.HTTPS}
** (EXIT) shutdown: failed to start child: :ranch_acceptors_sup
** (EXIT) :badarg
{"Kernel pid terminated",application_controller,"{application_start_failure,lite_chart_be,{{shutdown,{failed_to_start_child,'Elixir.LiteChartBe.Endpoint',{shutdown,{failed_to_start_child,'Elixir.Phoenix.Endpoint.Server',{shutdown,{failed_to_start_child,{ranch_listener_sup,'Elixir.LiteChartBe.Endpoint.HTTPS'},{shutdown,{failed_to_start_child,ranch_acceptors_sup,badarg}}}}}}}},{'Elixir.LiteChartBe',start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,lite_chart_be,{{shutdown,{failed_to_start_child,'Elixir.LiteChartBe.Endpoint',{shutdown,{failed_to_start_child,'Elixir.Phoeni
If I simply forward localhost to produrl in my local hosts file, no errors are thrown and nothing connects to the server using https.
The error states that you provided a wrong argument for the configuration of your Endpoint (** (EXIT) :badarg). I suppose that is because you are missing a comma behind your host URL.
This probably does not solve your problem, but that is supposedly the reason for the error message shown after your change.

Does Nexus 3 docker private registry support bearer authentication?

I want to use Nexus 3 as a private Docker registry. All was OK until I tried to connect Spinnaker to this registry; when Spinnaker tries to connect to the registry with credentials I see this error:
2017-02-06T12:22:54.867681925Z 2017-02-06 12:22:54.867 ERROR 1 --- [0.0-7002-exec-8] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is com.netflix.spinnaker.clouddriver.docker.registry.api.v2.exception.DockerRegistryAuthenticationException: Failed to parse www-authenticate header: Docker registry must support 'Bearer' authentication.] with root cause
2017-02-06T12:22:54.867710980Z
2017-02-06T12:22:54.867721497Z com.netflix.spinnaker.clouddriver.docker.registry.api.v2.exception.DockerRegistryAuthenticationException: Failed to parse www-authenticate header: Docker registry must support 'Bearer' authentication.
Does anybody know about this issue?
My clouddriver config:
dockerRegistry:
  enabled: true
  accounts:
    - name: docker
      address: http://nexus.infrastructure:5000
      username: admin
      password: password
      email: admin#test.ru
      repositories: