Why does local user login to vsftpd not work? - apache

I want to install vsftpd on an Ubuntu 14.04 server and access the files through an Apache httpd.
Following this guide, this is my vsftpd.conf:
# Example config file /etc/vsftpd.conf
#
# The default compiled in settings are fairly paranoid. This sample file
# loosens things up a bit, to make the ftp daemon more usable.
# Please see vsftpd.conf.5 for all compiled in defaults.
#
# READ THIS: This example file is NOT an exhaustive list of vsftpd options.
# Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
# capabilities.
#
#
# Run standalone? vsftpd can run either from an inetd or as a standalone
# daemon started from an initscript.
listen=YES
#
# Run standalone with IPv6?
# Like the listen parameter, except vsftpd will listen on an IPv6 socket
# instead of an IPv4 one. This parameter and the listen parameter are mutually
# exclusive.
#listen_ipv6=YES
#
# Allow anonymous FTP? (Disabled by default)
anonymous_enable=YES
#
# Uncomment this to allow local users to log in.
local_enable=YES
#
# Uncomment this to enable any form of FTP write command.
write_enable=YES
#
# Default umask for local users is 077. You may wish to change this to 022,
# if your users expect that (022 is used by most other ftpd's)
#local_umask=022
#
# Uncomment this to allow the anonymous FTP user to upload files. This only
# has an effect if the above global write enable is activated. Also, you will
# obviously need to create a directory writable by the FTP user.
#anon_upload_enable=YES
#
# Uncomment this if you want the anonymous FTP user to be able to create
# new directories.
#anon_mkdir_write_enable=YES
#
# Activate directory messages - messages given to remote users when they
# go into a certain directory.
dirmessage_enable=YES
#
# If enabled, vsftpd will display directory listings with the time
# in your local time zone. The default is to display GMT. The
# times returned by the MDTM FTP command are also affected by this
# option.
use_localtime=YES
#
# Activate logging of uploads/downloads.
xferlog_enable=YES
#
# Make sure PORT transfer connections originate from port 20 (ftp-data).
connect_from_port_20=YES
#
# If you want, you can arrange for uploaded anonymous files to be owned by
# a different user. Note! Using "root" for uploaded files is not
# recommended!
#chown_uploads=YES
#chown_username=whoever
#
# You may override where the log file goes if you like. The default is shown
# below.
xferlog_file=/var/log/vsftpd.log
#
# If you want, you can have your log file in standard ftpd xferlog format.
# Note that the default log file location is /var/log/xferlog in this case.
#xferlog_std_format=YES
#
# You may change the default value for timing out an idle session.
#idle_session_timeout=600
#
# You may change the default value for timing out a data connection.
#data_connection_timeout=120
#
# It is recommended that you define on your system a unique user which the
# ftp server can use as a totally isolated and unprivileged user.
#nopriv_user=ftpsecure
#
# Enable this and the server will recognise asynchronous ABOR requests. Not
# recommended for security (the code is non-trivial). Not enabling it,
# however, may confuse older FTP clients.
#async_abor_enable=YES
#
# By default the server will pretend to allow ASCII mode but in fact ignore
# the request. Turn on the below options to have the server actually do ASCII
# mangling on files when in ASCII mode.
# Beware that on some FTP servers, ASCII support allows a denial of service
# attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
# predicted this attack and has always been safe, reporting the size of the
# raw file.
# ASCII mangling is a horrible feature of the protocol.
#ascii_upload_enable=YES
#ascii_download_enable=YES
#
# You may fully customise the login banner string:
#ftpd_banner=Welcome to blah FTP service.
#
# You may specify a file of disallowed anonymous e-mail addresses. Apparently
# useful for combatting certain DoS attacks.
#deny_email_enable=YES
# (default follows)
#banned_email_file=/etc/vsftpd.banned_emails
#
# You may restrict local users to their home directories. See the FAQ for
# the possible risks in this before using chroot_local_user or
# chroot_list_enable below.
chroot_local_user=YES
#
# You may specify an explicit list of local users to chroot() to their home
# directory. If chroot_local_user is YES, then this list becomes a list of
# users to NOT chroot().
# (Warning! chroot'ing can be very dangerous. If using chroot, make sure that
# the user does not have write access to the top level directory within the
# chroot)
chroot_local_user=YES
#chroot_list_enable=YES
# (default follows)
#chroot_list_file=/etc/vsftpd.chroot_list
#
# You may activate the "-R" option to the builtin ls. This is disabled by
# default to avoid remote users being able to cause excessive I/O on large
# sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
# the presence of the "-R" option, so there is a strong case for enabling it.
#ls_recurse_enable=YES
#
# Customization
#
# Some of vsftpd's settings don't fit the filesystem layout by
# default.
#
# This option should be the name of a directory which is empty. Also, the
# directory should not be writable by the ftp user. This directory is used
# as a secure chroot() jail at times vsftpd does not require filesystem
# access.
secure_chroot_dir=/var/run/vsftpd/empty
#
# This string is the name of the PAM service vsftpd will use.
pam_service_name=vsftpd
#
# This option specifies the location of the RSA certificate to use for SSL
# encrypted connections.
rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
# This option specifies the location of the RSA key to use for SSL
# encrypted connections.
rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
userlist_deny=NO
userlist_enable=YES
userlist_file=/etc/vsftpd.user_list
vsftpd.user_list contains the newly created user ftpuser.
Anonymous login works if I comment out the three userlist settings at the bottom of the config (because there is no anonymous user in vsftpd.user_list), but logging in as ftpuser doesn't work: FTP error 530, invalid login.
I can't find any issue, and I have found exactly this config many times on the internet, each reportedly working fine.
I also tried reinstalling vsftpd and Apache from scratch, which did not help.
Here is the vsftpd log file:
Thu Aug 27 17:56:27 2015 [pid 15875] CONNECT: Client "95.223.27.113"
Thu Aug 27 17:56:27 2015 [pid 15875] FTP response: Client "95.223.27.113", "220 (vsFTPd 3.0.2)"
Thu Aug 27 17:56:27 2015 [pid 15875] FTP command: Client "95.223.27.113", "AUTH TLS"
Thu Aug 27 17:56:27 2015 [pid 15875] FTP response: Client "95.223.27.113", "530 Please login with USER and PASS."
Thu Aug 27 17:56:27 2015 [pid 15875] FTP command: Client "95.223.27.113", "AUTH SSL"
Thu Aug 27 17:56:27 2015 [pid 15875] FTP response: Client "95.223.27.113", "530 Please login with USER and PASS."
Thu Aug 27 17:56:28 2015 [pid 15875] FTP command: Client "95.223.27.113", "USER ftpuser"
Thu Aug 27 17:56:28 2015 [pid 15875] [ftpuser] FTP response: Client "95.223.27.113", "331 Please specify the password."
Thu Aug 27 17:56:28 2015 [pid 15875] [ftpuser] FTP command: Client "95.223.27.113", "PASS <password>"
Thu Aug 27 17:56:30 2015 [pid 15874] [ftpuser] FAIL LOGIN: Client "95.223.27.113"
Thu Aug 27 17:56:31 2015 [pid 15875] [ftpuser] FTP response: Client "95.223.27.113", "530 Login incorrect."

In my case, I had the same error (530) because my FTP user was assigned the /usr/sbin/nologin shell, but that shell was not listed in /etc/shells.
It is usually recommended to assign a non-interactive shell to FTP users who need FTP-only access, via usermod -s /usr/sbin/nologin ftpuser.
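For reference, a minimal sketch of that fix on a Debian/Ubuntu box, using the ftpuser account from the question (on a typical install vsftpd's PAM stack includes pam_shells, which rejects any login shell not listed in /etc/shells and surfaces as "530 Login incorrect"):
# give the FTP-only user a non-interactive shell
sudo usermod -s /usr/sbin/nologin ftpuser
# make sure that shell is listed in /etc/shells so pam_shells accepts it
grep -qx '/usr/sbin/nologin' /etc/shells || echo '/usr/sbin/nologin' | sudo tee -a /etc/shells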

Related

kafka - ssl handshake failing

I've set up SSL on my local Kafka instance, and when I start the Kafka console producer/consumer against the SSL port, it gives an SSL handshake error:
Karans-MacBook-Pro:keystore karanalang$ $CONFLUENT_HOME/bin/kafka-console-producer --broker-list localhost:9093 --topic karantest --producer.config $CONFLUENT_HOME/props/client-ssl.properties
>[2021-11-10 13:15:09,824] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2021-11-10 13:15:09,826] WARN [Producer clientId=console-producer] Bootstrap broker localhost:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2021-11-10 13:15:10,018] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2021-11-10 13:15:10,019] WARN [Producer clientId=console-producer] Bootstrap broker localhost:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2021-11-10 13:15:10,195] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
Here are the changes made:
created the truststore & keystore
Here is the output of the openssl command used to check SSL connectivity:
Karans-MacBook-Pro:keystore karanalang$ openssl s_client -debug -connect localhost:9093 -tls1
CONNECTED(00000005)
write to 0x13d7bdf90 [0x13e01ea03] (118 bytes => 118 (0x76))
0000 - 16 03 01 00 71 01 00 00-6d 03 01 81 e8 00 cd c4 ....q...m.......
0010 - 04 4b 64 86 3e 30 97 32-c3 66 3a 8c ed 05 bf 97 .Kd.>0.2.f:.....
0020 - ff d5 b2 a4 26 fe 99 c0-7f 94 a1 00 00 2e c0 14 ....&...........
0030 - c0 0a 00 39 ff 85 00 88-00 81 00 35 00 84 c0 13 ...9.......5....
---
0076 - <SPACES/NULS>
read from 0x13d7bdf90 [0x13e01a803] (5 bytes => 5 (0x5))
0005 - <SPACES/NULS>
4307385836:error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number:/System/Volumes/Data/SWE/macOS/BuildRoots/e90674e518/Library/Caches/com.apple.xbs/Sources/libressl/libressl-56.60.2/libressl-2.8/ssl/ssl_pkt.c:386:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Start Time: 1636579015
Timeout : 7200 (sec)
Verify return code: 0 (ok)
---
Here is the server.properties:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
# SSL CHANGE
listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
# SSL CHANGE
advertised.listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
ssl.client.auth=none
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000
##################### Confluent Metrics Reporter #######################
# Confluent Control Center and Confluent Auto Data Balancer integration
#
# Uncomment the following lines to publish monitoring data for
# Confluent Control Center and Confluent Auto Data Balancer
# If you are using a dedicated metrics cluster, also adjust the settings
# to point to your metrics kakfa cluster.
#metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
#confluent.metrics.reporter.bootstrap.servers=localhost:9092
#
# Uncomment the following line if the metrics cluster has a single broker
#confluent.metrics.reporter.topic.replicas=1
############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
############################# Confluent Authorizer Settings #############################
# Uncomment to enable Confluent Authorizer with support for ACLs, LDAP groups and RBAC
#authorizer.class.name=io.confluent.kafka.security.authorizer.ConfluentServerAuthorizer
# Semi-colon separated list of super users in the format <principalType>:<principalName>
#super.users=
# Specify a valid Confluent license. By default free-tier license will be used
#confluent.license=
# Replication factor for the topic used for licensing. Default is 3.
confluent.license.topic.replication.factor=1
# Uncomment the following lines and specify values where required to enable CONFLUENT provider for RBAC and centralized ACLs
# Enable CONFLUENT provider
#confluent.authorizer.access.rule.providers=ZK_ACL,CONFLUENT
# Bootstrap servers for RBAC metadata. Must be provided if this broker is not in the metadata cluster
#confluent.metadata.bootstrap.servers=PLAINTEXT://127.0.0.1:9092
# Replication factor for the metadata topic used for authorization. Default is 3.
confluent.metadata.topic.replication.factor=1
# Replication factor for the topic used for audit logs. Default is 3.
confluent.security.event.logger.exporter.kafka.topic.replicas=1
# Listeners for metadata server
#confluent.metadata.server.listeners=http://0.0.0.0:8090
# Advertised listeners for metadata server
#confluent.metadata.server.advertised.listeners=http://127.0.0.1:8090
############################# Confluent Data Balancer Settings #############################
# The Confluent Data Balancer is used to measure the load across the Kafka cluster and move data
# around as necessary. Comment out this line to disable the Data Balancer.
confluent.balancer.enable=true
# By default, the Data Balancer will only move data when an empty broker (one with no partitions on it)
# is added to the cluster or a broker failure is detected. Comment out this line to allow the Data
# Balancer to balance load across the cluster whenever an imbalance is detected.
#confluent.balancer.heal.uneven.load.trigger=ANY_UNEVEN_LOAD
# The default time to declare a broker permanently failed is 1 hour (3600000 ms).
# Uncomment this line to turn off broker failure detection, or adjust the threshold
# to change the duration before a broker is declared failed.
#confluent.balancer.heal.broker.failure.threshold.ms=-1
# Edit and uncomment the following line to limit the network bandwidth used by data balancing operations.
# This value is in bytes/sec/broker. The default is 10MB/sec.
#confluent.balancer.throttle.bytes.per.second=10485760
# Capacity Limits -- when set to positive values, the Data Balancer will attempt to keep
# resource usage per-broker below these limits.
# Edit and uncomment this line to limit the maximum number of replicas per broker. Default is unlimited.
#confluent.balancer.max.replicas=10000
# Edit and uncomment this line to limit what fraction of the log disk (0-1.0) is used before rebalancing.
# The default (below) is 85% of the log disk.
#confluent.balancer.disk.max.load=0.85
# Edit and uncomment these lines to define a maximum network capacity per broker, in bytes per
# second. The Data Balancer will attempt to ensure that brokers are using less than this amount
# of network bandwidth when rebalancing.
# Here, 10MB/s. The default is unlimited capacity.
#confluent.balancer.network.in.max.bytes.per.second=10485760
#confluent.balancer.network.out.max.bytes.per.second=10485760
# Edit and uncomment this line to identify specific topics that should not be moved by the data balancer.
# Removal operations always move topics regardless of this setting.
#confluent.balancer.exclude.topic.names=
# Edit and uncomment this line to identify topic prefixes that should not be moved by the data balancer.
# (For example, a "confluent.balancer" prefix will match all of "confluent.balancer.a", "confluent.balancer.b",
# "confluent.balancer.c", and so on.)
# Removal operations always move topics regardless of this setting.
#confluent.balancer.exclude.topic.prefixes=
# The replication factor for the topics the Data Balancer uses to store internal state.
# For anything other than development testing, a value greater than 1 is recommended to ensure availability.
# The default value is 3.
confluent.balancer.topic.replication.factor=1
################################## Confluent Telemetry Settings ##################################
# To start using Telemetry, first generate a Confluent Cloud API key/secret. This can be done with
# instructions at https://docs.confluent.io/current/cloud/using/api-keys.html. Note that you should
# be using the '--resource cloud' flag.
#
# After generating an API key/secret, to enable Telemetry uncomment the lines below and paste
# in your API key/secret.
#
#confluent.telemetry.enabled=true
#confluent.telemetry.api.key=<CLOUD_API_KEY>
#confluent.telemetry.api.secret=<CCLOUD_API_SECRET>
############ SSL #################
ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
ssl.truststore.password=test123
ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
ssl.keystore.password=test123
ssl.key.password=test123
# confluent.metrics.reporter.bootstrap.servers=localhost:9093
# confluent.metrics.reporter.security.protocol=SSL
# confluent.metrics.reporter.ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
# confluent.metrics.reporter.ssl.truststore.password=test123
# confluent.metrics.reporter.ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
# confluent.metrics.reporter.ssl.keystore.password=test123
# confluent.metrics.reporter.ssl.key.password=test123
client-ssl.properties:
bootstrap.servers=localhost:9093
security.protocol=SSL
ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
ssl.truststore.password=test123
ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
ssl.keystore.password=test123
ssl.key.password=test123
Commands to start the console producer/consumer:
$CONFLUENT_HOME/bin/kafka-console-producer --broker-list localhost:9093 --topic karantest --producer.config $CONFLUENT_HOME/props/client-ssl.properties
$CONFLUENT_HOME/bin/kafka-console-consumer --bootstrap-server localhost:9093 --topic karantest --consumer.config $CONFLUENT_HOME/props/client-ssl.properties --from-beginning
Any ideas on how to resolve this?
Update:
This is the error when I try to debug (using export KAFKA_OPTS=-Djavax.net.debug=all):
javax.net.ssl|DEBUG|0E|kafka-producer-network-thread | console-producer|2021-11-10 14:04:26.107 PST|SSLExtensions.java:173|Ignore unavailable extension: status_request
javax.net.ssl|DEBUG|0E|kafka-producer-network-thread | console-producer|2021-11-10 14:04:26.107 PST|SSLExtensions.java:173|Ignore unavailable extension: status_request
javax.net.ssl|ERROR|0E|kafka-producer-network-thread | console-producer|2021-11-10 14:04:26.108 PST|TransportContext.java:341|Fatal (CERTIFICATE_UNKNOWN): No name matching localhost found (
"throwable" : {
java.security.cert.CertificateException: No name matching localhost found
at java.base/sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:234)
at java.base/sun.security.util.HostnameChecker.match(HostnameChecker.java:103)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:429)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:283)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141)
at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1335)
at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.onConsumeCertificate(CertificateMessage.java:1232)
at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.consume(CertificateMessage.java:1175)
at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:443)
at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1074)
at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1061)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1008)
at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:509)
at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:601)
at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:447)
at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:332)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:229)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:563)
at org.apache.kafka.common.network.Selector.poll(Selector.java:499)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:639)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:327)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:242)
at java.base/java.lang.Thread.run(Thread.java:829)}
Adding the following in client-ssl.properties resolved the issue:
ssl.endpoint.identification.algorithm=
The underlying error means the certificate does not match the hostname of the machine you are using to run the producer/consumer; leaving ssl.endpoint.identification.algorithm empty disables hostname verification, which seems to be the recommended approach in this case.
Related thread:
Kafka java consumer SSL handshake Error : java.security.cert.CertificateException: No subject alternative names present
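As an alternative to disabling hostname verification, the broker certificate can be issued for the hostname the clients actually use. A rough self-signed sketch with keytool, assuming the same keystore/truststore file names and passwords as in the question:
# 1. generate a broker key pair whose certificate names localhost (CN and SAN)
keytool -genkeypair -alias localhost -keyalg RSA -keysize 2048 -validity 365 \
  -dname "CN=localhost" -ext "SAN=DNS:localhost,IP:127.0.0.1" \
  -keystore kafka.keystore.jks -storepass test123 -keypass test123
# 2. export the certificate and import it into the client truststore
keytool -exportcert -alias localhost -keystore kafka.keystore.jks -storepass test123 -file kafka.crt
keytool -importcert -alias localhost -file kafka.crt -keystore kafka.truststore.jks -storepass test123 -noprompt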
Try setting the identification algorithm for the producer and consumer as well.
ssl.endpoint.identification.algorithm=
producer.ssl.endpoint.identification.algorithm=
consumer.ssl.endpoint.identification.algorithm=
Check whether you have connectivity problems.
You can test access with:
openssl s_client -debug -connect servername:port -tls1_2
The answer must be: "Verify return code: 0 (ok)"
Otherwise you have no access.

When I enable SSL on apache2 ubuntu server both http and https does not work

I am trying to enable SSL for my web server. However, when I enable SSL, HTTP stops working and HTTPS does not start working. I have followed this guide:
https://www.digitalocean.com/community/tutorials/how-to-create-a-ssl-certificate-on-apache-for-ubuntu-14-04
There is no firewall activated on the server.
This is the default-ssl.conf file:
<IfModule mod_ssl.c>
<VirtualHost _default_:443>
ServerAdmin admin#MyWebSit.com
ServerName MyWebSite.com
ServerAlias www.MyWebSite.com
DocumentRoot /var/www/html
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf
# SSL Engine Switch:
# Enable/Disable SSL for this virtual host.
SSLEngine on
# A self-signed (snakeoil) certificate can be created by installing
# the ssl-cert package. See
# /usr/share/doc/apache2/README.Debian.gz for more info.
# If both key and certificate are stored in the same file, only the
# SSLCertificateFile directive is needed.
SSLCertificateFile /etc/apache2/ssl/MyWebSite_com.crt
SSLCertificateKeyFile /etc/apache2/ssl/MyWebSite_com.key
# Server Certificate Chain:
# Point SSLCertificateChainFile at a file containing the
# concatenation of PEM encoded CA certificates which form the
# certificate chain for the server certificate. Alternatively
# the referenced file can be the same as SSLCertificateFile
# when the CA certificates are directly appended to the server
# certificate for convinience.
#SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt
# Certificate Authority (CA):
# Set the CA certificate verification path where to find CA
# certificates for client authentication or alternatively one
# huge file containing all of them (file must be PEM encoded)
# Note: Inside SSLCACertificatePath you need hash symlinks
# to point to the certificate files. Use the provided
# Makefile to update the hash symlinks after changes.
#SSLCACertificatePath /etc/ssl/certs/
#SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt
# Certificate Revocation Lists (CRL):
# Set the CA revocation path where to find CA CRLs for client
# authentication or alternatively one huge file containing all
# of them (file must be PEM encoded)
# Note: Inside SSLCARevocationPath you need hash symlinks
# to point to the certificate files. Use the provided
# Makefile to update the hash symlinks after changes.
#SSLCARevocationPath /etc/apache2/ssl.crl/
#SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl
# Client Authentication (Type):
# Client certificate verification type and depth. Types are
# none, optional, require and optional_no_ca. Depth is a
# number which specifies how deeply to verify the certificate
# issuer chain before deciding the certificate is not valid.
#SSLVerifyClient require
#SSLVerifyDepth 10
# SSL Engine Options:
# Set various options for the SSL engine.
# o FakeBasicAuth:
# Translate the client X.509 into a Basic Authorisation. This means that
# the standard Auth/DBMAuth methods can be used for access control. The
# user name is the `one line' version of the client's X.509 certificate.
# Note that no password is obtained from the user. Every entry in the user
# file needs this password: `xxj31ZMTZzkVA'.
# o ExportCertData:
# This exports two additional environment variables: SSL_CLIENT_CERT and
# SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
# server (always existing) and the client (only existing when client
# authentication is used). This can be used to import the certificates
# into CGI scripts.
# o StdEnvVars:
# This exports the standard SSL/TLS related `SSL_*' environment variables.
# Per default this exportation is switched off for performance reasons,
# because the extraction step is an expensive operation and is usually
# useless for serving static content. So one usually enables the
# exportation for CGI and SSI requests only.
# o OptRenegotiate:
# This enables optimized SSL connection renegotiation handling when SSL
# directives are used in per-directory context.
#SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
<FilesMatch "\.(cgi|shtml|phtml|php)$">
SSLOptions +StdEnvVars
</FilesMatch>
<Directory /usr/lib/cgi-bin>
SSLOptions +StdEnvVars
</Directory>
# SSL Protocol Adjustments:
# The safe and default but still SSL/TLS standard compliant shutdown
# approach is that mod_ssl sends the close notify alert but doesn't wait for
# the close notify alert from client. When you need a different shutdown
# approach you can use one of the following variables:
# o ssl-unclean-shutdown:
# This forces an unclean shutdown when the connection is closed, i.e. no
# SSL close notify alert is send or allowed to received. This violates
# the SSL/TLS standard but is needed for some brain-dead browsers. Use
# this when you receive I/O errors because of the standard approach where
# mod_ssl sends the close notify alert.
# o ssl-accurate-shutdown:
# This forces an accurate shutdown when the connection is closed, i.e. a
# SSL close notify alert is send and mod_ssl waits for the close notify
# alert of the client. This is 100% SSL/TLS standard compliant, but in
# practice often causes hanging connections with brain-dead browsers. Use
# this only for browsers where you know that their SSL implementation
# works correctly.
# Notice: Most problems of broken clients are also related to the HTTP
# keep-alive facility, so you usually additionally want to disable
# keep-alive for those clients, too. Use variable "nokeepalive" for this.
# Similarly, one has to force some clients to use HTTP/1.0 to workaround
# their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
# "force-response-1.0" for this.
# BrowserMatch "MSIE [2-6]" \
# nokeepalive ssl-unclean-shutdown \
# downgrade-1.0 force-response-1.0
</VirtualHost>
</IfModule>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
apache -S gives me:
AH00111: Config variable ${APACHE_RUN_DIR} is not defined
apache2: Syntax error on line 80 of /etc/apache2/apache2.conf: DefaultRuntimeDir must be a valid directory, absolute or relative to ServerRoot
and apachectl -S gives me:
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 192.168.178.24. Set the 'ServerName' directive globally to suppress this message
VirtualHost configuration:
*:80 192.168.178.24 (/etc/apache2/sites-enabled/000-default.conf:1)
*:443 MyWebSite.com (/etc/apache2/sites-enabled/default-ssl.conf:2)
ServerRoot: "/etc/apache2"
Main DocumentRoot: "/var/www/html"
Main ErrorLog: "/var/log/apache2/error.log"
Mutex watchdog-callback: using_defaults
Mutex ssl-stapling-refresh: using_defaults
Mutex ssl-stapling: using_defaults
Mutex ssl-cache: using_defaults
Mutex default: dir="/var/run/apache2/" mechanism=default
Mutex mpm-accept: using_defaults
PidFile: "/var/run/apache2/apache2.pid"
Define: DUMP_VHOSTS
Define: DUMP_RUN_CFG
User: name="www-data" id=33
Group: name="www-data" id=33
Disabling SSL immediately gets HTTP back up (after a restart of Apache).
Unfortunately, I no longer know what else I can try.
Any assistance here would be greatly appreciated!
Thank you in advance!
EDIT:
As it is clear the information I've provided does not fully explain my issue, I am adding additional details here:
sudo service apache2 restart
Gives the following result:
Warning: The unit file, source configuration file or drop-ins of
apache2.service changed on disk. Run 'systemctl daemon-reload' to
reload units. Job for apache2.service failed because the control
process exited with error code. See "systemctl status apache2.service"
and "journalctl -xe" for details
systemctl daemon-reload
Runs successfully, but I still get the "Job failed" response when running the restart command again. Below is the output of "systemctl status apache2.service":
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/apache2.service.d
└─apache2-systemd.conf
Active: failed (Result: exit-code) since Mon 2019-12-02 11:08:57 CET; 3h 28min ago
Process: 4557 ExecStart=/usr/sbin/apachectl start (code=exited, status=1/FAILURE)
Main PID: 1413 (code=exited, status=0/SUCCESS)
Dec 02 11:08:57 ubuntu systemd[1]: Starting The Apache HTTP Server...
Dec 02 11:08:57 ubuntu apachectl[4557]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 192.168.178.24. Set the 'ServerNa
Dec 02 11:08:57 ubuntu apachectl[4557]: Action 'start' failed.
Dec 02 11:08:57 ubuntu apachectl[4557]: The Apache error log may have more information.
Dec 02 11:08:57 ubuntu systemd[1]: apache2.service: Control process exited, code=exited status=1
Dec 02 11:08:57 ubuntu systemd[1]: apache2.service: Failed with result 'exit-code'.
Dec 02 11:08:57 ubuntu systemd[1]: Failed to start The Apache HTTP Server.
And below is the result for journalctl -xe
--
-- Unit motd-news.service has begun starting up.
Dec 02 13:56:00 ubuntu 50-motd-news[5122]: * Overheard at KubeCon: "microk8s.status just blew my mind".
Dec 02 13:56:00 ubuntu 50-motd-news[5122]: https://microk8s.io/docs/commands#microk8s.status
Dec 02 13:56:00 ubuntu systemd[1]: Started Message of the Day.
-- Subject: Unit motd-news.service has finished start-up
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit motd-news.service has finished starting up.
--
-- The start-up result is RESULT.
Dec 02 14:09:02 ubuntu CRON[5169]: pam_unix(cron:session): session opened for user root by (uid=0)
Dec 02 14:09:02 ubuntu CRON[5170]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
Dec 02 14:09:02 ubuntu CRON[5169]: pam_unix(cron:session): session closed for user root
Dec 02 14:09:44 ubuntu systemd[1]: Starting Clean php session files...
-- Subject: Unit phpsessionclean.service has begun start-up
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit phpsessionclean.service has begun starting up.
Dec 02 14:09:44 ubuntu sessionclean[5171]: PHP Warning: PHP Startup: Unable to load dynamic library 'mysqli' (tried: /usr/lib/php/20170718/mysqli (/usr/lib/php/2017071
Dec 02 14:09:44 ubuntu systemd[1]: Started Clean php session files.
-- Subject: Unit phpsessionclean.service has finished start-up
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit phpsessionclean.service has finished starting up.
--
-- The start-up result is RESULT.
Dec 02 14:17:01 ubuntu CRON[5220]: pam_unix(cron:session): session opened for user root by (uid=0)
Dec 02 14:17:01 ubuntu CRON[5221]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Dec 02 14:17:01 ubuntu CRON[5220]: pam_unix(cron:session): session closed for user root
Dec 02 14:18:00 ubuntu systemd-timesyncd[1097]: Network configuration changed, trying to establish connection.
Dec 02 14:18:00 ubuntu systemd-timesyncd[1097]: Synchronized to time server 91.189.89.198:123 (ntp.ubuntu.com).
Dec 02 14:39:01 ubuntu CRON[5241]: pam_unix(cron:session): session opened for user root by (uid=0)
Dec 02 14:39:01 ubuntu CRON[5242]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
Dec 02 14:39:01 ubuntu CRON[5241]: pam_unix(cron:session): session closed for user root
Dec 02 14:39:44 ubuntu systemd[1]: Starting Clean php session files...
-- Subject: Unit phpsessionclean.service has begun start-up
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit phpsessionclean.service has begun starting up.
Dec 02 14:39:44 ubuntu sessionclean[5244]: PHP Warning: PHP Startup: Unable to load dynamic library 'mysqli' (tried: /usr/lib/php/20170718/mysqli (/usr/lib/php/2017071
Dec 02 14:39:44 ubuntu systemd[1]: Started Clean php session files.
-- Subject: Unit phpsessionclean.service has finished start-up
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit phpsessionclean.service has finished starting up.
--
-- The start-up result is RESULT.
Dec 02 14:47:59 ubuntu systemd-timesyncd[1097]: Network configuration changed, trying to establish connection.
Dec 02 14:47:59 ubuntu systemd-timesyncd[1097]: Synchronized to time server 91.189.89.198:123 (ntp.ubuntu.com).
After a lot of searching I found out that it was due to my key being corrupt.
I was able to determine this by checking the apache error log:
sudo nano /var/log/apache2/error.log
[Mon Dec 02 11:08:57.784521 2019] [ssl:error] [pid 4560] AH02579: Init: Private key not found
[Mon Dec 02 11:08:57.784840 2019] [ssl:error] [pid 4560] SSL Library Error: error:0D0680A8:asn1 encoding routines:asn1_check_tlen:wrong tag
[Mon Dec 02 11:08:57.784922 2019] [ssl:error] [pid 4560] SSL Library Error: error:0D08303A:asn1 encoding routines:asn1_template_noexp_d2i:nested asn1 error
[Mon Dec 02 11:08:57.784990 2019] [ssl:error] [pid 4560] SSL Library Error: error:0D0680A8:asn1 encoding routines:asn1_check_tlen:wrong tag
[Mon Dec 02 11:08:57.785061 2019] [ssl:error] [pid 4560] SSL Library Error: error:0D07803A:asn1 encoding routines:asn1_item_embed_d2i:nested asn1 error (Type=RSAPrivat$
[Mon Dec 02 11:08:57.785135 2019] [ssl:error] [pid 4560] SSL Library Error: error:04093004:rsa routines:old_rsa_priv_decode:RSA lib
[Mon Dec 02 11:08:57.785200 2019] [ssl:error] [pid 4560] SSL Library Error: error:0D0680A8:asn1 encoding routines:asn1_check_tlen:wrong tag
[Mon Dec 02 11:08:57.785269 2019] [ssl:error] [pid 4560] SSL Library Error: error:0D07803A:asn1 encoding routines:asn1_item_embed_d2i:nested asn1 error (Type=PKCS8_PRI$
[Mon Dec 02 11:08:57.785434 2019] [ssl:emerg] [pid 4560] AH02311: Fatal error initialising mod_ssl, exiting. See /var/log/apache2/error.log for more information
[Mon Dec 02 11:08:57.785469 2019] [ssl:emerg] [pid 4560] AH02564: Failed to configure encrypted (?) private key MyWebSite.com:443:0, check /etc/apache2/ssl/MyWebSite$
AH00016: Configuration Failed
As displayed, "Private key not found" was not referring to the path of the key, but rather the key being corrupt. I checked this by opening the key with:
sudo nano MyWebSite.key
If the key is correct, it will have the text
-----BEGIN PRIVATE KEY-----
at the top of the key. The solution was then to regenerate the certificate request, have the certificate re-issued and install the new certificate. I hope this helps you if you're in the same situation as I was.
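A quick way to check for a corrupt or mismatched key, assuming an RSA key/certificate pair at the paths used in the virtual host above:
# verify the private key parses and is internally consistent
sudo openssl rsa -in /etc/apache2/ssl/MyWebSite_com.key -check -noout
# the key and the certificate must share the same modulus, so these two digests must match
sudo openssl x509 -in /etc/apache2/ssl/MyWebSite_com.crt -noout -modulus | openssl md5
sudo openssl rsa -in /etc/apache2/ssl/MyWebSite_com.key -noout -modulus | openssl md5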

Monit on apache server: why does Monit log "'apache' error -- unknown resource ID: [5]" in /var/log/monit

We are monitoring several servers with Monit, version 5.25.1.
Some are dedicated Apache servers. The monitoring is OK.
But the Monit log (/var/log/monit) looks like this:
[CET Mar 18 03:12:03] info : Starting Monit 5.25.1 daemon with http interface at [0.0.0.0]:3353
[CET Mar 18 03:12:03] info : Monit start delay set to 180s
[CET Mar 18 03:15:03] info : 'xxxxx.localhost' Monit 5.25.1 started
[CET Mar 18 03:15:03] error : 'apache' error -- unknown resource ID: [5]
[CET Mar 18 03:16:08] error : 'apache' error -- unknown resource ID: [5]
[CET Mar 18 03:17:08] error : 'apache' error -- unknown resource ID: [5]
[CET Mar 18 03:18:08] error : 'apache' error -- unknown resource ID: [5]
The configuration file /etc/monit.conf is like this:
###############################################################################
## Monit control file
###############################################################################
###############################################################################
## Global section
###############################################################################
##
## Start Monit in the background (run as a daemon):
# check services at 2-minute intervals
# with start delay 240 # optional: delay the first check by 4-minutes (by
# # default Monit check immediately after Monit start)
set daemon 60
with start delay 180
### Set the location of the Monit id file which stores the unique id for the
### Monit instance. The id is generated and stored on first Monit start. By
### default the file is placed in $HOME/.monit.id.
#
set idfile /var/.monit.id
## Set the list of mail servers for alert delivery. Multiple servers may be
## specified using a comma separator. By default Monit uses port 25 - it is
## possible to override this with the PORT option.
#
# set mailserver mail.bar.baz, # primary mailserver
# backup.bar.baz port 10025, # backup mailserver on port 10025
# localhost # fallback relay
set mailserver localhost
## By default Monit will drop alert events if no mail servers are available.
## If you want to keep the alerts for later delivery retry, you can use the
## EVENTQUEUE statement. The base directory where undelivered alerts will be
## stored is specified by the BASEDIR option. You can limit the maximal queue
## size using the SLOTS option (if omitted, the queue is limited by space
## available in the back end filesystem).
#
set eventqueue
basedir /var/monit # set the base directory where events will be stored
slots 100 # optionally limit the queue size
## Send status and events to M/Monit (for more informations about M/Monit
## see http://mmonit.com/).
#
# set mmonit http://monit:monit#192.168.1.10:8080/collector
#
#
## Monit by default uses the following alert mail format:
##
##
## You can override this message format or parts of it, such as subject
## or sender using the MAIL-FORMAT statement. Macros such as $DATE, etc.
## are expanded at runtime. For example, to override the sender, use:
#
# set mail-format { from: monit#foo.bar }
#
#
## You can set alert recipients whom will receive alerts if/when a
## service defined in this file has errors. Alerts may be restricted on
## events by using a filter as in the second example below.
#
set alert fake#mail.com not on { instance }
# receive all alerts
# set alert manager#foo.bar only on { timeout } # receive just service-
# # timeout alert
#
mail-format {
from: xxxxxxx#monit.localhost
subject: $SERVICE => $EVENT
message:
DESCRIPTION : $DESCRIPTION
ACTION : $ACTION
DATE : $DATE
HOST : $HOST
Sorry for the spam.
Monit
}
## Monit has an embedded web server which can be used to view status of
## services monitored and manage services from a web interface. See the
## Monit Wiki if you want to enable SSL for the web server.
#
set httpd port 3353 and
use address 0.0.0.0
allow yyyyy:zzzz
###############################################################################
## Services
###############################################################################
##
## Check general system resources such as load average, cpu and memory
## usage. Each test specifies a resource, conditions and the action to be
## performed should a test fail.
#
check system xxxxxx.localhost
if loadavg (1min) > 8 for 5 cycles then alert
if loadavg (5min) > 4 for 5 cycles then alert
if memory usage > 75% for 5 cycles then alert
if cpu usage (user) > 70% for 5 cycles then alert
if cpu usage (system) > 50% for 5 cycles then alert
if cpu usage (wait) > 50% for 5 cycles then alert
check process apache with pidfile /var/run/httpd/httpd.pid
group www
start program = "/etc/init.d/httpd start" with timeout 60 seconds
stop program = "/etc/init.d/httpd stop"
if failed host localhost port 80 then restart
if cpu > 60% for 2 cycles then alert
if cpu > 80% for 5 cycles then restart
if loadavg(5min) greater than 10 for 8 cycles then restart
if 3 restarts within 5 cycles then timeout
###############################################################################
## Includes
###############################################################################
##
## It is possible to include additional configuration parts from other files or
## directories.
#
# include /etc/monit.d/*
#
#
# Include all files from /etc/monit.d/
include /etc/monit.d/*
In the Monit UI everything is OK, and the monitoring is 100% useful. We can stop and restart the service as we want.
So I don't understand the message "error : 'apache' error -- unknown resource ID: [5]" that we find in the Monit log.
Does anyone have an idea about it?
Thanks for your help.
I had the same problem.
M/Monit support said that loadavg is for "check system" only. It used to work for apache but not anymore:
"The loadavg statement can be used in "check system" context only (load average is system property, not process'). Please remove the following statement and reload monit"
So disable this line by adding # at the start of it:
# if loadavg(5min) greater than 10 for 8 cycles then restart
then restart monit
service monit restart
You will no longer receive the apache error.
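For reference, a sketch of the adjusted service checks, with the load-average test kept under check system only (service names and thresholds taken from the configuration above):
check system xxxxxx.localhost
    if loadavg (5min) > 10 for 8 cycles then alert

check process apache with pidfile /var/run/httpd/httpd.pid
    group www
    start program = "/etc/init.d/httpd start" with timeout 60 seconds
    stop program = "/etc/init.d/httpd stop"
    if failed host localhost port 80 then restart
    if cpu > 60% for 2 cycles then alert
    if cpu > 80% for 5 cycles then restart
    if 3 restarts within 5 cycles then timeout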

Not able to start/access Klov Server at http://localhost for extent reports - Error starting ApplicationContext

I'm new to Klov reports. I have downloaded the Klov jar from http://extentreports.com/community/0 and tried running the Klov server (klov-0.2.0.jar) following the instructions at https://github.com/anshooarora/klov (java -jar klov-0.2.0.jar). However, I'm getting the error below and am not able to start the Klov server (http://localhost:portNo).
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2018-11-13 09:09:08.298 ERROR 40212 --- [ main] o.s.boot.SpringApplication : Application run failed
MongoDB 3.2 is running and listening on port 27017.
The Klov application.properties file resides in the same folder as klov-0.2.0.jar.
I have tried different ports for Klov (80, 90, 2571, 1337), but all give the same error as in the description.
I'm running it on Windows 10, with the application.properties settings below:
# klov
application.name=Klov
server.host=localhost
server.port=90
# data.mongodb
spring.data.mongodb.host=localhost
spring.data.mongodb.port=27017
spring.data.mongodb.database=klov
# data.rest
spring.data.rest.basePath=/rest
spring.data.rest.default-page-size=6
# redis, session
#use.redis.session.store=false
#spring.redis.host=localhost
#spring.redis.port=6379
#spring.redis.ssl=false
#spring.redis.database=0
#spring.session.store-type=redis
#server.session.timeout=-1
# users
server.admin.name=klovadmin
server.admin.key=$2a$10$I/5TFi6BrHChUghTZEZfCO82txzu8L5brcK0CxhS3m.V6glfj2vZe
# storage
file.storage.location=./upload/reports/
# schedulers
scheduler.jobs.enabled=false
# automatically delete older builds
# default is -1 (keep all)
# this count must be greater than 0 for this scheduler to work
# scheduled to run daily at 12:00AM
scheduler.job.builds.retain.count=-1
# mail
spring.mail.host=
spring.mail.port=
spring.mail.username=
spring.mail.password=
spring.mail.properties.mail.smtp.ssl.enable=true
#spring.mail.properties.mail.smtp.starttls.enable=true
#spring.mail.properties.mail.smtp.starttls.required=true
spring.mail.properties.mail.smtp.auth=true
#spring.mail.properties.mail.smtp.connectiontimeout=5000
#spring.mail.properties.mail.smtp.timeout=5000
#spring.mail.properties.mail.smtp.writetimeout=5000
spring.mail.test-connection=true
Since you are not using a mail server, start with the following properties:
# klov
application.name=Klov
server.host=localhost
server.port=80
# data.mongodb
spring.data.mongodb.host=localhost
spring.data.mongodb.port=27017
spring.data.mongodb.database=klov
# data.rest
spring.data.rest.basePath=/rest
spring.data.rest.default-page-size=6
# redis, session
use.redis.session.store=false
spring.redis.host=localhost
spring.redis.port=6379
spring.redis.ssl=false
spring.redis.database=0
spring.session.store-type=redis
server.session.timeout=-1
# users
server.admin.name=klovadmin
server.admin.key=$2a$10$I/5TFi6BrHChUghTZEZfCO82txzu8L5brcK0CxhS3m.V6glfj2vZe
# storage
file.storage.location=./upload/reports/
# schedulers
scheduler.jobs.enabled=false
# automatically delete older builds
# default is -1 (keep all)
# this count must be greater than 0 for this scheduler to work
# scheduled to run daily at 12:00AM
scheduler.job.builds.retain.count=-1
# mail
#spring.mail.host=
#spring.mail.port=
#spring.mail.username=
#spring.mail.password=
#spring.mail.properties.mail.smtp.ssl.enable=true
#spring.mail.properties.mail.smtp.starttls.enable=true
#spring.mail.properties.mail.smtp.starttls.required=true
#spring.mail.properties.mail.smtp.auth=true
#spring.mail.properties.mail.smtp.connectiontimeout=5000
#spring.mail.properties.mail.smtp.timeout=5000
#spring.mail.properties.mail.smtp.writetimeout=5000
spring.mail.test-connection=false
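With those properties saved next to the jar, start the server again and, assuming it comes up cleanly, browse to http://localhost (or whatever server.port is configured):
java -jar klov-0.2.0.jar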

I'm having trouble authenticating over AD to windows machines from my ansible host. 'Server not found in Kerberos Database' on Ubuntu 16.10

I'm having trouble authenticating over AD to Windows machines from my Ansible host. I have a valid Kerberos ticket:
klist
Credentials cache: FILE:/tmp/krb5cc_1000
Principal: ansible#SOMEDOMAIN.LOCAL
Issued Expires Principal
Mar 10 09:15:27 2017 Mar 10 19:15:24 2017 krbtgt/SOMEDOMAIN.LOCAL#SOMEDOMAIN.LOCAL
My Kerberos config looks fine to me:
cat /etc/krb5.conf
[libdefaults]
default_realm = SOMEDOMAIN.LOCAL
# dns_lookup_realm = true
# dns_lookup_kdc = true
# ticket_lifetime = 24h
# renew_lifetime = 7d
# forwardable = true
# The following krb5.conf variables are only for MIT Kerberos.
# kdc_timesync = 1
# forwardable = true
# proxiable = true
# The following encryption type specification will be used by MIT Kerberos
# if uncommented. In general, the defaults in the MIT Kerberos code are
# correct and overriding these specifications only serves to disable new
# encryption types as they are added, creating interoperability problems.
#
# Thie only time when you might need to uncomment these lines and change
# the enctypes is if you have local software that will break on ticket
# caches containing ticket encryption types it doesn't know about (such as
# old versions of Sun Java).
# default_tgs_enctypes = des3-hmac-sha1
# default_tkt_enctypes = des3-hmac-sha1
# permitted_enctypes = des3-hmac-sha1
# The following libdefaults parameters are only for Heimdal Kerberos.
# v4_instance_resolve = false
# v4_name_convert = {
# host = {
# rcmd = host
# ftp = ftp
# }
# plain = {
# something = something-else
# }
# }
# fcc-mit-ticketflags = true
[realms]
SOMEDOMAIN.LOCAL = {
kdc = prosperitydc1.somedomain.local
kdc = prosperitydc2.somedomain.local
default_domain = somedomain.local
admin_server = somedomain.local
}
[domain_realm]
.somedomain.local = SOMEDOMAIN.LOCAL
somedomain.local = SOMEDOMAIN.LOCAL
When running a test command, ansible windows -m win_ping -vvvvv, I get
'Server not found in Kerberos database'.
ansible windows -m win_ping -vvvvv
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/windows/win_ping.ps1
<kerberostest.somedomain.local> ESTABLISH WINRM CONNECTION FOR USER: ansible#SOMEDOMAIN.LOCAL on PORT 5986 TO kerberostest.somedomain.local
<kerberostest.somedomain.local> WINRM CONNECT: transport=kerberos endpoint=https://kerberostest.somedomain.local:5986/wsman
<kerberostest.somedomain.local> WINRM CONNECTION ERROR: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/ansible/plugins/connection/winrm.py", line 154, in _winrm_connect
self.shell_id = protocol.open_shell(codepage=65001) # UTF-8
File "/home/prosperity/.local/lib/python2.7/site-packages/winrm/protocol.py", line 132, in open_shell
res = self.send_message(xmltodict.unparse(req))
File "/home/prosperity/.local/lib/python2.7/site-packages/winrm/protocol.py", line 207, in send_message
return self.transport.send_message(message)
File "/home/prosperity/.local/lib/python2.7/site-packages/winrm/transport.py", line 181, in send_message
prepared_request = self.session.prepare_request(request)
File "/home/prosperity/.local/lib/python2.7/site-packages/requests/sessions.py", line 407, in prepare_request
hooks=merge_hooks(request.hooks, self.hooks),
File "/home/prosperity/.local/lib/python2.7/site-packages/requests/models.py", line 306, in prepare
self.prepare_auth(auth, url)
File "/home/prosperity/.local/lib/python2.7/site-packages/requests/models.py", line 543, in prepare_auth
r = auth(self)
File "/home/prosperity/.local/lib/python2.7/site-packages/requests_kerberos/kerberos_.py", line 308, in __call__
auth_header = self.generate_request_header(None, host, is_preemptive=True)
File "/home/prosperity/.local/lib/python2.7/site-packages/requests_kerberos/kerberos_.py", line 148, in generate_request_header
raise KerberosExchangeError("%s failed: %s" % (kerb_stage, str(error.args)))
KerberosExchangeError: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))
kerberostest.somedomain.local | UNREACHABLE! => {
"changed": false,
"msg": "kerberos: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))",
"unreachable": true
}
I am able to SSH to the target machine:
ssh -v1 kerberostest.somedomain.local -p 5986
OpenSSH_7.3p1 Ubuntu-1, OpenSSL 1.0.2g 1 Mar 2016
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to kerberostest.somedomain.local [10.10.20.84] port 5986.
debug1: Connection established.
I can also ping all hosts by their hostname. I'm at a loss :(
Here is the Ansible hosts file:
sudo cat /etc/ansible/hosts
# This is the default ansible 'hosts' file.
#
# It should live in /etc/ansible/hosts
#
# - Comments begin with the '#' character
# - Blank lines are ignored
# - Groups of hosts are delimited by [header] elements
# - You can enter hostnames or ip addresses
# - A hostname/ip can be a member of multiple groups
# Ex 1: Ungrouped hosts, specify before any group headers.
## green.example.com
## blue.example.com
## 192.168.100.1
## 192.168.100.10
# Ex 2: A collection of hosts belonging to the 'webservers' group
## [webservers]
## alpha.example.org
## beta.example.org
## 192.168.1.100
## 192.168.1.110
# If you have multiple hosts following a pattern you can specify
# them like this:
## www[001:006].example.com
# Ex 3: A collection of database servers in the 'dbservers' group
## [dbservers]
##
## db01.intranet.mydomain.net
## db02.intranet.mydomain.net
## 10.25.1.56
## 10.25.1.57
# Here's another example of host ranges, this time there are no
# leading 0s:
## db-[99:101]-node.example.com
[monitoring-servers]
#nagios
10.10.20.75 ansible_connection=ssh ansible_user=nagios
[windows]
#fileserver.somedomain.local#this machine isnt joined to the domain yet.
kerberostest.SOMEDOMAIN.LOCAL
[windows:vars]
#the following works for windows local account authentication
#ansible_ssh_user = prosperity
#ansible_ssh_pass = *********
#ansible_connection = winrm
#ansible_ssh_port = 5986
#ansible_winrm_server_cert_validation = ignore
#vars needed to authenticate on the windows domain using kerberos
ansible_user = ansible#SOMEDOMAIN.LOCAL
ansible_connection = winrm
ansible_winrm_scheme = https
ansible_winrm_transport = kerberos
ansible_winrm_server_cert_validation = ignore
I also tried joining the domain with realmd, which succeeded, but running the ansible command produced the same result.
This looks like a case of a missing SPN.
Here's the relevant error snippet:
<kerberostest.prosperityerp.local> ESTABLISH WINRM CONNECTION FOR USER: ansible#PROSPERITYERP.LOCAL on PORT 5986 TO kerberostest.prosperityerp.local
<kerberostest.prosperityerp.local> WINRM CONNECT: transport=kerberos endpoint=https://kerberostest.prosperityerp.local:5986/wsman
<kerberostest.prosperityerp.local> WINRM CONNECTION ERROR: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))
And that is based off something I noticed in your Ansible configuration file:
[windows]
#fileserver.prosperityerp.local#this machine isnt joined to the domain yet.
kerberostest.PROSPERITYERP.LOCAL
I think the "this machine isnt joined to the domain yet" line in that file is a good indicator that the SPN HTTP/kerberostest.prosperityerp.local does not exist in Active Directory, which would be causing the "server not found" message. You can SSH to kerberostest.prosperityerp.local, probably because it exists in DNS or in a hosts file on the client machine, but unless and until the SPN HTTP/kerberostest.prosperityerp.local is created in Active Directory you will continue to get that error message. Adding that SPN properly at this point would be a whole other topic of discussion.
You could use a command like this to test if you have that SPN defined:
setspn -Q HTTP/kerberostest.prosperityerp.local
SPNs exist to tell a Kerberos client where to find the instance of a given service on the network.
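If the query shows that the SPN is missing, here is a sketch of how it could be registered against the computer account of the target host (hypothetical account name, run with AD admin rights; -S checks for duplicates before adding):
setspn -S HTTP/kerberostest.prosperityerp.local KERBEROSTEST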
Also run:
nslookup kerberostest.prosperityerp.local
on at least two client machines to make sure the FQDN of the host where the Kerberized service is running exists in DNS. DNS is a requirement for Kerberos to run properly in a network.
Finally, you could use Wireshark on the client for further analysis, using the filter kerberos to show only Kerberos traffic.
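A command-line equivalent, assuming tshark is installed on the client:
sudo tshark -i any -Y kerberos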
In my case, the Server not found in Kerberos database error was a result of the target Windows machine's DNS name not being mapped to the right realm, as hinted at in this line from this Microsoft Technet Article:
The error “Server not found in Kerberos database” is common and can be misleading because it often appears when the service principal is not missing. The error can be caused by domain/realm mapping problems or it can be the result of a DNS problem where the service principal name is not being built correctly. Server logs and network traces can be used to determine what service principal is actually being requested.
I had this playbook, whoami.yaml:
- hosts: windows-machine.mydomain.com
  tasks:
    - name: Run 'whoami' command
      win_command: whoami
Hosts file:
[windows]
windows-machine.mydomain.com
[windows:vars]
ansible_connection=winrm
ansible_winrm_transport=kerberos
ansible_user=user#FOO.BAR.MYDOMAIN.COM
ansible_password=<password>
ansible_port=5985
Since the DNS name was windows-machine.mydomain.com, but the AD realm was FOO.BAR.MYDOMAIN.COM I had to fix the mapping in my /etc/krb5.conf file on my Ansible host:
INCORRECT
This won't work for our case since this mapping rule won't apply to windows-machine.mydomain.com:
[domain_realm]
foo.bar.mydomain.com = FOO.BAR.MYDOMAIN.COM
CORRECT
This will correctly map windows-machine.mydomain.com to realm FOO.BAR.MYDOMAIN.COM
[domain_realm]
.mydomain.com = FOO.BAR.MYDOMAIN.COM
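To sanity-check a mapping like this, you can request a service ticket for the host directly with the MIT client tools (assuming they are installed on the Ansible host); if the domain_realm mapping is right, kvno succeeds instead of reporting "Server not found in Kerberos database":
kinit user@FOO.BAR.MYDOMAIN.COM
kvno -S HTTP windows-machine.mydomain.com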