SaltStack Tomcat Deployment: 'tomcat.war_deployed' error - tomcat8

I'm new to SaltStack and I'm working on a build to deploy Tomcat and Tomcat WAR files to Ubuntu 16.04 systems. I hadn't run into any issues until my first attempt at deploying a WAR file with tomcat.war_deployed. If anyone with more SaltStack experience could provide feedback, I'd greatly appreciate it.
/srv/pillar/top.sls
base:
  '*':
    - tomcat-manager
/srv/pillar/tomcat-manager.sls
tomcat-manager:
  user: 'myuser'
  passwd: 'mypassword'
Output of salt '*' pillar.test
tomcat-manager:
    ------------
    passwd:
        mypassword
    user:
        myuser
mystate.sls
# Install tomcat8 packages.
install_tomcat:
  pkg.installed:
    - pkgs:
      - tomcat8
      - tomcat8-admin

# Install postgresql packages.
install_postgresql:
  pkg.installed:
    - name: postgresql-9.5

# Start tomcat service.
start_service_tomcat:
  service.running:
    - name: tomcat8
    - enable: True
    - require:
      - pkg: install_tomcat
    - watch:
      - file: sync tomcat-users.xml

# Tomcat deploy war files.
deploy_war:
  tomcat.war_deployed:
    - name: /mywar
    - war: salt://files/tomcat/war/mywar.war
    - require:
      - service: start_service_tomcat

# Start postgresql service.
start_service_postgresql:
  service.running:
    - name: postgresql
    - enable: True
    - require:
      - pkg: install_postgresql
    - watch:
      - file: sync pg_hba.conf
      - file: sync postgresql.conf
Output of salt '*' state.sls mystate
----------
ID: deploy_war
Function: tomcat.war_deployed
Name: /mywar
Result: False
Comment: Failed to create HTTP request
Started: 15:54:02.314254
Duration: 1980.229 ms
Changes:
[...]
Failed: 1
-------------
Total states run: 12
Total run time: 2.671 s
ERROR: Minions returned with non-zero exit code
Updates
myminion:8080/manager is accessible on my minion(s).
I haven't been able to find whether SaltStack officially supports Tomcat 8, so I decided to test this with Tomcat 7, and it's giving me the same issue.
When I run salt '*' tomcat.version on the minions:
myminion:
    Apache Tomcat/7.0.68 (Ubuntu)
myminion2:
    Apache Tomcat/8.0.32 (Ubuntu)
Output of salt '*' tomcat.status:
myminion:
    False
myminion1:
    False
Output of salt '*' tomcat.serverinfo:
myminion:
    ----------
    error:
        Failed to create HTTP request
myminion1:
    ----------
    error:
        Failed to create HTTP request
I haven't had any luck with search engines for "Failed to create HTTP request" yet.
Output of sudo salt-call -l debug tomcat.serverinfo:
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG ] Using cached minion ID from /etc/salt/minion_id: myminion
[DEBUG ] Configuration file path: /etc/salt/minion
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[WARNING ] Unable to find IPv6 record for "myminion" causing a 10 second timeout when rendering grains. Set the dns or /etc/hosts for IPv6 to clear this.
[DEBUG ] Please install 'virt-what' to improve results of the 'virtual' grain.
[DEBUG ] Connecting to master. Attempt 1 of 1
[DEBUG ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'myminion', 'tcp://mymaster:4506')
[DEBUG ] Generated random reconnect delay between '1000ms' and '11000ms' (7330)
[DEBUG ] Setting zmq_reconnect_ivl to '7330ms'
[DEBUG ] Setting zmq_reconnect_ivl_max to '11000ms'
[DEBUG ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'myminion', 'tcp://mymaster:4506', 'clear')
[DEBUG ] Decrypting the current master AES key
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] SaltEvent PUB socket URI: /var/run/salt/minion/minion_event_bbef5074cf_pub.ipc
[DEBUG ] SaltEvent PULL socket URI: /var/run/salt/minion/minion_event_bbef5074cf_pull.ipc
[DEBUG ] Initializing new IPCClient for path: /var/run/salt/minion/minion_event_bbef5074cf_pull.ipc
[DEBUG ] Sending event: tag = salt/auth/creds; data = {'_stamp': '2017-05-04T18:14:04.328838', 'creds': {'publish_port': 4505, 'aes': '######/#####/###############################################=', 'master_uri': 'tcp://mymaster:4506'}, 'key': ('/etc/salt/pki/minion', 'myminion', 'tcp://mymaster:4506')}
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] Determining pillar cache
[DEBUG ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'myminion', 'tcp://mymaster:4506', 'aes')
[DEBUG ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'myminion', 'tcp://mymaster:4506')
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] LazyLoaded jinja.render
[DEBUG ] LazyLoaded yaml.render
[DEBUG ] LazyLoaded tomcat.serverinfo
[DEBUG ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'myminion', 'tcp://mymaster:4506', 'aes')
[DEBUG ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'myminion', 'tcp://mymaster:4506')
[DEBUG ] LazyLoaded nested.output
local:
    ----------
    error:
        Failed to create HTTP request
Output of salt-call test.versions:
[WARNING ] Unable to find IPv6 record for "dt-rhettvm-01" causing a 10 second timeout when rendering grains. Set the dns or /etc/hosts for IPv6 to clear this.
local:
Salt Version:
Salt: 2016.11.4
Dependency Versions:
cffi: Not Installed
cherrypy: Not Installed
dateutil: 2.4.2
docker-py: Not Installed
gitdb: Not Installed
gitpython: Not Installed
ioflo: Not Installed
Jinja2: 2.8
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: 1.0.3
msgpack-pure: Not Installed
msgpack-python: 0.4.6
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pycryptodome: Not Installed
pygit2: Not Installed
Python: 2.7.12 (default, Nov 19 2016, 06:48:10)
python-gnupg: Not Installed
PyYAML: 3.11
PyZMQ: 15.2.0
RAET: Not Installed
smmap: Not Installed
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.1.4
System Versions:
dist: Ubuntu 16.04 xenial
machine: x86_64
release: 4.4.0-62-generic
system: Linux
version: Ubuntu 16.04 xenial

What worked for me was restarting the Tomcat instance on the minion.
The Tomcat instance could still have been running with the old user parameters.
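If it helps, a minimal way to do that restart from the master (assuming the tomcat8 service name used in the state above) and then re-check the manager connection:

salt '*' service.restart tomcat8
salt '*' tomcat.status

tomcat.status should return True once the minion can reach the manager with the credentials from the pillar.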

Related

Trivy on EKS unable to scan any images

I am trying to scan all images deployed on an EKS cluster that I am setting up for high security (it will be deployed to a classified IL5 environment). Kubernetes v1.23; all worker nodes run Bottlerocket OS.
I expect images to be scanned and available in the VulnerabilityReports CRD.
I was able to successfully install Falco on the cluster (it uses containerd). However, when deploying the official Helm chart (0.6.0-rc3), the scan-vulnerability containers start and then immediately error out. I set this environment variable on the trivy-operator deployment:
- name: CONTAINER_RUNTIME_ENDPOINT
  value: /run/containerd/containerd.sock
Output of run with -debug:
{
"level": "error",
"ts": 1668286646.865245,
"logger": "reconciler.vulnerabilityreport",
"msg": "Scan job container",
"job": "trivy-system/scan-vulnerabilityreport-74f54b6cd",
"container": "discovery",
"status.reason": "Error",
"status.message": "2022-11-12T20:57:13.674Z\t\u001b[31mFATAL\u001b[0m\timage scan error: scan error: unable to initialize a scanner: unable to initialize a docker scanner: 4 errors occurred:\n\t* unable to inspect the image (023620263533.dkr.ecr.us-gov-east-1.amazonaws.com/docker.io/istio/pilot:1.15.2): Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\t* unable to initialize Podman client: no podman socket found: stat podman/podman.sock: no such file or directory\n\t* containerd socket not found: /run/containerd/containerd.sock\n\t* GET https://023620263533.dkr.ecr.us-gov-east-1.amazonaws.com/v2/docker.io/istio/pilot/manifests/1.15.2: unexpected status code 401 Unauthorized: Not Authorized\n\n\n\n",
"stacktrace": "github.com/aquasecurity/trivy-operator/pkg/vulnerabilityreport.(*WorkloadController).processFailedScanJob\n\t/home/runner/work/trivy-operator/trivy-operator/pkg/vulnerabilityreport/controller.go:551\ngithub.com/aquasecurity/trivy-operator/pkg/vulnerabilityreport.(*WorkloadController).reconcileJobs.func1\n\t/home/runner/work/trivy-operator/trivy-operator/pkg/vulnerabilityreport/controller.go:376\nsigs.k8s.io/controller-runtime/pkg/reconcile.Func.Reconcile\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime#v0.13.1/pkg/reconcile/reconcile.go:102\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime#v0.13.1/pkg/internal/controller/controller.go:121\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime#v0.13.1/pkg/internal/controller/controller.go:320\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime#v0.13.1/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime#v0.13.1/pkg/internal/controller/controller.go:234"
}
I confirmed that Bottlerocket uses containerd, as /run/containerd/containerd.sock is specified on my Falco deployment. Even when I mount this as a volume onto the pod and set CONTAINER_RUNTIME_ENDPOINT to this path, I get the same error.
Edit
I added the following security context:
seLinuxOptions:
  user: system_u
  role: system_r
  type: control_t
  level: s0-s0:c0.c1023
Initially I mounted the dockershim.sock from the host into the pod, then realized that was not necessary; the error messages were a little misleading, and it was really an authentication issue with ECR. Furthermore, the seLinux flags needed to be specified at the pod level, not the container level.
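For that last point, a minimal sketch of pod-level (rather than container-level) seLinuxOptions in a plain pod spec; the pod name and image are placeholders, and the exact Helm value that renders this in the trivy-operator chart may differ:

apiVersion: v1
kind: Pod
metadata:
  name: scan-job-example            # hypothetical name, for illustration only
spec:
  securityContext:                  # pod-level: applies to all containers in the pod
    seLinuxOptions:
      user: system_u
      role: system_r
      type: control_t
      level: s0-s0:c0.c1023
  containers:
    - name: scanner
      image: example/scanner:latest # placeholder image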

Filebeat does not complete on close_eof + --once

Using filebeat 7.5.2:
I'm using a filebeat configuration with close_eof enabled and I run filebeat with the --once flag. I can see the harvester reaching EOF, but filebeat keeps going.
Filebeat conf:
filebeat.inputs:
- type: log
  close_eof: true
  enabled: true
  paths:
    - "${LOGS_PATH}"
  scan_frequency: 1s
  fields: {
    machine: "${HOST}"
  }

output.logstash:
  hosts: ["192.168.41.6:5044"]
  bulk_max_size: 1024
  timeout: 30s
  pipelining: 1
  workers: 1
And I run it using:
filebeat run --once -v -c "PATH TO CONF..."
And some logs from the filebeat instance:
...
2020-02-04T18:30:16.950Z INFO instance/beat.go:297 Setup Beat: filebeat; Version: 7.5.2
2020-02-04T18:30:17.059Z INFO [publisher] pipeline/module.go:97 Beat name: logstash
2020-02-04T18:30:17.167Z WARN beater/filebeat.go:152 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-02-04T18:30:17.168Z INFO instance/beat.go:429 filebeat start running.
2020-02-04T18:30:17.168Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
2020-02-04T18:30:17.168Z INFO registrar/migrate.go:104 No registry home found. Create: /tmp/tmp.BXJtfiaEzb/data/registry/filebeat
2020-02-04T18:30:17.179Z INFO registrar/migrate.go:112 Initialize registry meta file
2020-02-04T18:30:17.192Z INFO registrar/registrar.go:108 No registry file found under: /tmp/tmp.BXJtfiaEzb/data/registry/filebeat/data.json. Creating a new registry file.
2020-02-04T18:30:17.193Z INFO registrar/registrar.go:145 Loading registrar data from /tmp/tmp.BXJtfiaEzb/data/registry/filebeat/data.json
2020-02-04T18:30:17.193Z INFO registrar/registrar.go:152 States Loaded from registrar: 0
2020-02-04T18:30:17.193Z WARN beater/filebeat.go:368 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-02-04T18:30:17.193Z INFO crawler/crawler.go:72 Loading Inputs: 1
2020-02-04T18:30:17.194Z INFO log/input.go:152 Configured paths: [/tmp/tmp.BXJtfiaEzb/*.log]
2020-02-04T18:30:17.206Z INFO input/input.go:114 Starting input of type: log; ID: 13918413832820009056
2020-02-04T18:30:17.225Z INFO input/input.go:167 Stopping Input: 13918413832820009056
2020-02-04T18:30:17.225Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2020-02-04T18:30:17.225Z INFO log/harvester.go:251 Harvester started for file: /tmp/tmp.BXJtfiaEzb/dcbgw-20200124080032_darkblue.log
2020-02-04T18:30:17.231Z INFO beater/filebeat.go:384 Running filebeat once. Waiting for completion ...
2020-02-04T18:30:17.231Z INFO beater/filebeat.go:386 All data collection completed. Shutting down.
2020-02-04T18:30:17.231Z INFO crawler/crawler.go:139 Stopping Crawler
2020-02-04T18:30:17.231Z INFO crawler/crawler.go:149 Stopping 1 inputs
2020-02-04T18:30:17.258Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://192.168.41.6:5044))
2020-02-04T18:30:17.296Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://192.168.41.6:5044)) established
... Only metrics here ...
2020-02-04T18:35:55.686Z INFO log/harvester.go:274 End of file reached: /tmp/tmp.BXJtfiaEzb/dcbgw-20200124080032_darkblue.log. Closing because close_eof is enabled.
2020-02-04T18:35:55.686Z INFO crawler/crawler.go:165 Crawler stopped
... MORE METRICS ...
2020-02-04T18:36:26.609Z ERROR logstash/async.go:256 Failed to publish events caused by: read tcp 192.168.41.6:49662->192.168.41.6:5044: i/o timeout
2020-02-04T18:36:26.621Z ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
2020-02-04T18:36:28.520Z ERROR pipeline/output.go:121 Failed to publish events: client is not connected
2020-02-04T18:36:28.520Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://192.168.41.6:5044))
2020-02-04T18:36:28.521Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://192.168.41.6:5044)) established
... MORE METRICS ...
From this I'm outputting to Logstash 7.5.2 running in the same Ubuntu 18 VM. Running Logstash with log level trace does not output any errors.

Unable to execute the script file using ansible playbook for redislabs & jmeter

I am trying to install redislabs and jmeter using an Ansible playbook, but I am unable to execute the script from the playbook. Please find my playbook and the error below.
ERROR:
fatal: [localhost]: FAILED! => {"changed": true, "msg": "non-zero return code", "rc": 127, "stderr": "/home/ansibleadm/.ansible/tmp/ansible-tmp-1576768466.18-58336526997867/jmeter.sh: line 109: /home/ansibleadm/.ansible/tmp/ansible-tmp-1576768466.18-58336526997867/jmeter: No such file or directory\n", "stderr_lines": ["/home/ansibleadm/.ansible/tmp/ansible-tmp-1576768466.18-58336526997867/jmeter.sh: line 109: /home/ansibleadm/.ansible/tmp/ansible-tmp-1576768466.18-58336526997867/jmeter: No such file or directory"], "stdout": "", "stdout_lines": []}
Note: the error below is for jmeter; I am getting the same kind of error ("No such file or directory") for redislabs as well.
cat jmeter.yaml
- hosts: localhost
  user: ansibleadm
  connection: local
  become: yes
  become_method: sudo
  tasks:
    - name: creating jmeter directory
      file: path=/home/ansibleadm/jmeter state=directory mode=0700 owner=ansibleadm group=ansibleadm
    - name: downloading jmeter tar file
      get_url:
        url: http://apache.mirrors.tds.net//jmeter/source/apache-jmeter-5.2.1_src.tgz
        dest: /home/ansibleadm/jmeter
    - name: untar the file
      unarchive:
        src: "/home/ansibleadm/jmeter/apache-jmeter-5.2.1_src.tgz"
        dest: "/home/ansibleadm/jmeter"
    - name: executing jmeter.sh file
      script: "/home/ansibleadm/jmeter/apache-jmeter-5.2.1/bin/jmeter.sh"
2: Please find the redislabs playbook and error:
- hosts: redisgroup
  user: ansibleadm
  become: yes
  become_method: sudo
  tasks:
    - name: creating a directory for redislabs
      file: path=/home/ansibleadm/remote_redis owner=ansibleadm group=ansibleadm mode=0700 state=directory
    - name: defining a variable
      set_fact:
        redis_variable: "/home/ansibleadm/remote_redis"
    - name: copy the tar file from src to destination.
      copy: src=/home/ansibleadm/redislabs-5.4.6-18-rhel7-x86_64.tar dest="{{redis_variable}}/redislabs-5.4.6-18-rhel7-x86_64.tar"
    - name: untar the file
      unarchive:
        src: /home/ansibleadm/redislabs-5.4.6-18-rhel7-x86_64.tar
        dest: "{{redis_variable}}"
    - name: execute the install.sh file in remote server
      shell: "{{redis_variable}}/install.sh -y"
ERROR:
FAILED! => {"changed": true, "cmd": "/home/ansibleadm/remote_redis/install.sh -y", "delta": "0:00:04.792255", "end": "2019-12-20 02:33:32.430351", "msg": "non-zero return code", "rc": 1, "start": "2019-12-20 02:33:27.638096", "stderr": "/home/ansibleadm/remote_redis/install.sh: line 25: rlec_upgrade_tmpdir/upgrade_checks_error_codes.sh: No such file or directory\ntouch: cannot touch ‘/var/opt/redislabs/log/install.log’: No such file or directory\nchmod: cannot access ‘/var/opt/redislabs/log/install.log’: No such file or directory\n/home/ansibleadm/remote_redis/install.sh: line 64: /var/opt/redislabs/log/install.log: No such file or directory", "stderr_lines": ["/home/ansibleadm/remote_redis/install.sh: line 25: rlec_upgrade_tmpdir/upgrade_checks_error_codes.sh: No such file or directory", "touch: cannot touch ‘/var/opt/redislabs/log/install.log’: No such file or directory", "chmod: cannot access ‘/var/opt/redislabs/log/install.log’: No such file or directory", "/home/ansibleadm/remote_redis/install.sh: line 64: /var/opt/redislabs/log/install.log: No such file or directory"], "stdout": "/home/ansibleadm/remote_redis/install.sh: line 25: rlec_upgrade_tmpdir/upgrade_checks_error_codes.sh: No such file or directory\n2019-12-20 02:33:27 [.] Checking prerequisites\n2019-12-20 02:33:27 [.] Checking hardware requirements...\n2019-12-20 02:33:27 [!] The node’s hardware does not meet the minimum requirements for a production system: \nThe node has 2 cores (minimum is 4) and 7 GB RAM (minimum is 15 GB). \nConsider upgrading your hardware in the case of a production system.\n================================================================================\n\u001b[1m\u001b[91mRedis\u001b[90mLabs\u001b[0m Enterprise Cluster installer.\n================================================================================\n\n2019-12-20 02:33:28 \u001b[92m[.] Checking root access\u001b[0m\n2019-12-20 02:33:28 \u001b[33m[!] Running as user root, sudo is not required.\u001b[0m\n2019-12-20 02:33:28 \u001b[92m[.] Updating paths.sh\u001b[0m\n2019-12-20 02:33:28 \u001b[92m[.] Creating socket directory /var/opt/redislabs/run \u001b[0m\n2019-12-20 02:33:29 \u001b[92m[.] Deleting \u001b[1m\u001b[91mRedis\u001b[90mLabs\u001b[0m debug package if exist\u001b[0m\n2019-12-20 02:33:29 \u001b[92m[.] Installing \u001b[1m\u001b[91mRedis\u001b[90mLabs\u001b[0m packages\u001b[0m\n2019-12-20 02:33:29 \u001b[37m[$] executing: 'yum install -y redislabs-5.4.6-18.rhel7.x86_64.rpm redislabs-utils-5.4.6-18.rhel7.x86_64.rpm'\u001b[0m\n\u001b[90mLoaded plugins: enabled_repos_upload, package_upload, product-id, search-\n : disabled-repos, subscription-manager, tracer_upload\nNo package redislabs-5.4.6-18.rhel7.x86_64.rpm available.\nNo package redislabs-utils-5.4.6-18.rhel7.x86_64.rpm available.\nError: Nothing to do\nUploading Enabled Repositories Report\nLoaded plugins: product-id, subscription-manager\n\u001b[0m2019-12-20 02:33:32 \u001b[91m[x] yum install failed\u001b[0m", "stdout_lines": ["/home/ansibleadm/remote_redis/install.sh: line 25: rlec_upgrade_tmpdir/upgrade_checks_error_codes.sh: No such file or directory", "2019-12-20 02:33:27 [.] Checking prerequisites", "2019-12-20 02:33:27 [.] Checking hardware requirements...", "2019-12-20 02:33:27 [!] The node’s hardware does not meet the minimum requirements for a production system: ", "The node has 2 cores (minimum is 4) and 7 GB RAM (minimum is 15 GB). ", "Consider upgrading your hardware in the case of a production system.",
In the last step, change script: to shell:.
The script task "uploads" the script to the target host and executes the uploaded copy, but it is uploaded into a temporary directory (see the ansible-tmp-XXXXXXX in the error output). The script (jmeter.sh) then tries to find jmeter in that directory, but obviously it is not there. By using shell: instead, the script is simply run from its proper place.

Filebeat not starting TCP server (input)

So I have configured filebeat to accept input via TCP. This is the filebeat.yml file:
filebeat.inputs:
- type: tcp
  host: ["localhost:9000"]
  max_message_size: 20MiB
For some reason filebeat does not start the TCP server at port 9000. I have verified this using wireshark. Wireshark shows nothing at port 9000.
This is the output of the command filebeat -e -d "*" run in a terminal:
2019-08-14T09:12:40.745-0600 INFO instance/beat.go:468 Home path: [/usr/local/Cellar/filebeat/6.2.4] Config path: [/usr/local/etc/filebeat] Data path: [/usr/local/var/lib/filebeat] Logs path: [/usr/local/var/log/filebeat]
2019-08-14T09:12:40.745-0600 DEBUG [beat] instance/beat.go:495 Beat metadata path: /usr/local/var/lib/filebeat/meta.json
2019-08-14T09:12:40.745-0600 INFO instance/beat.go:475 Beat UUID: 764da0fd-ea93-4777-b1ea-63149be0d6b6
2019-08-14T09:12:40.745-0600 INFO instance/beat.go:213 Setup Beat: filebeat; Version: 6.2.4
2019-08-14T09:12:40.745-0600 DEBUG [beat] instance/beat.go:230 Initializing output plugins
2019-08-14T09:12:40.745-0600 DEBUG [processors] processors/processor.go:49 Processors:
2019-08-14T09:12:40.745-0600 INFO pipeline/module.go:76 Beat name: Ad-MBP.domain
2019-08-14T09:12:40.745-0600 ERROR fileset/modules.go:95 Not loading modules. Module directory not found: /usr/local/Cellar/filebeat/6.2.4/module
2019-08-14T09:12:40.745-0600 INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2019-08-14T09:12:40.745-0600 INFO instance/beat.go:301 filebeat start running.
2019-08-14T09:12:40.745-0600 DEBUG [registrar] registrar/registrar.go:90 Registry file set to: /usr/local/var/lib/filebeat/registry
2019-08-14T09:12:40.746-0600 INFO registrar/registrar.go:110 Loading registrar data from /usr/local/var/lib/filebeat/registry
2019-08-14T09:12:40.746-0600 INFO registrar/registrar.go:121 States Loaded from registrar: 0
2019-08-14T09:12:40.746-0600 WARN beater/filebeat.go:261 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2019-08-14T09:12:40.746-0600 INFO crawler/crawler.go:48 Loading Prospectors: 1
2019-08-14T09:12:40.746-0600 DEBUG [registrar] registrar/registrar.go:152 Starting Registrar
2019-08-14T09:12:40.746-0600 DEBUG [cfgfile] cfgfile/reload.go:95 Checking module configs from: /usr/local/etc/filebeat/modules.d/*.yml
2019-08-14T09:12:40.746-0600 DEBUG [cfgfile] cfgfile/reload.go:109 Number of module configs found: 0
2019-08-14T09:12:40.746-0600 INFO crawler/crawler.go:82 Loading and starting Prospectors completed. Enabled prospectors: 0
2019-08-14T09:12:40.746-0600 INFO cfgfile/reload.go:127 Config reloader started
2019-08-14T09:12:40.748-0600 DEBUG [cfgfile] cfgfile/reload.go:151 Scan for new config files
2019-08-14T09:12:40.748-0600 DEBUG [cfgfile] cfgfile/reload.go:170 Number of module configs found: 0
2019-08-14T09:12:40.748-0600 INFO cfgfile/reload.go:219 Loading of config files completed.
I am not sure what I am doing wrong..
I believe filebeat inputs are only available from Filebeat 6.3+; anything older used filebeat prospectors.
6.3 TCP input documentation, nothing available for 6.2 or older as it uses prospectors:
https://www.elastic.co/guide/en/beats/filebeat/6.3/filebeat-input-tcp.html
Your logs show that you are on Filebeat version 6.2.4; could you try out your configuration with 6.3+?
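For reference, on 6.3+ the same input would be declared roughly like this (a minimal sketch reusing the port from your config; per the linked docs, host is a single string rather than a list):

filebeat.inputs:
- type: tcp
  host: "localhost:9000"
  max_message_size: 20MiB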

Problem with filebeat yml file on Windows

I am quite new to the Elastic stack and trying to experiment with visualization of apache log files in Kibana. I am using filebeat to ingest the apache logs. However, when I run .\filebeat.exe setup -e, I get the following error:
2019-02-05T20:53:10.515+0530 INFO elasticsearch/client.go:165 Elasticsearch url: http://localhost:9200
2019-02-05T20:53:10.520+0530 INFO elasticsearch/client.go:721 Connected to Elasticsearch version 6.6.0
2019-02-05T20:53:10.520+0530 INFO kibana/client.go:118 Kibana url: http://localhost:5601
2019-02-05T20:53:10.567+0530 WARN fileset/modules.go:388 X-Pack Machine Learning is not enabled
2019-02-05T20:53:10.572+0530 ERROR instance/beat.go:911 Exiting: 1 error: error loading config file: invalid config: yaml: line 4: did not find expected hexdecimal number
My filebeat.yml file looks like this:
filebeat.inputs:
- type: log
  enabled: true
  paths: C:\Users\bigdataadmin\Downloads\ApacheLogs\*
#============================= Filebeat modules ===============================
filebeat.config.modules:
  path: C:\Program Files\Filebeat\modules.d\*.yml
  reload.enabled: true
  reload.period: 60s
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
  host: "localhost:5601"
output.elasticsearch:
  hosts: ["localhost:9200"]
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
I also checked the yml on http://www.yamllint.com/ but didn't find any problems. I can't seem to figure out what's wrong with line 4 of this file.
I am using filebeat 6.6
The paths key (on line 4) is an array, so you need to represent an array there.
Example:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\Users\bigdataadmin\Downloads\ApacheLogs\*
Please be very cautious about the data types you are representing in such config files. I made the same mistake while I was working on Filebeat, and I had to spend a lot of time on a small mistake...