Packetbeat dashboard installation

I am trying to install the Packetbeat dashboards, and this command works as expected. I have installed the matching version of Kibana.
docker run docker.elastic.co/beats/packetbeat:5.5.0 ./scripts/import_dashboards -es http://172.31.73.234:9200
When I try to install the latest version of Packetbeat, I get this error:
docker run docker.elastic.co/beats/packetbeat:6.1.3 ./scripts/import_dashboards -es http://1.2.3.4:9200
/usr/local/bin/docker-entrypoint: line 13: /usr/share/packetbeat/packetbeat: Operation not permitted
I have checked that Packetbeat and Kibana are both using the same version, 6.1.3.
1) Why does line 13 fail with version 6.1.3 and not with 5.5.0?
2) Is there any other way to install Packetbeat using Docker?
Update:
In other words, this works when Elasticsearch and Packetbeat are both using the same version, 5.6.7:
docker run docker.elastic.co/beats/packetbeat:5.6.7 ./scripts/import_dashboards -es https://0457e68d58e2479e1e73facc72f6cc56.us-east-1.aws.found.io:9243 -user elastic -pass XXX
But this does not work with either Elasticsearch version 6 or the Kibana API:
# docker run docker.elastic.co/beats/packetbeat:6.1.3 ./scripts/import_dashboards -es https://db301e3a9602f088035cc828312ebdf2.us-east-1.aws.found.io:9243 -user elastic -pass xxx
/usr/local/bin/docker-entrypoint: line 13: /usr/share/packetbeat/packetbeat: Operation not permitted
# docker run docker.elastic.co/beats/packetbeat:5.6.7 ./scripts/import_dashboards -es https://db301e3a9602f088035cc828312ebdf2.us-east-1.aws.found.io:9243 -user elastic -pass xxx
Initialize the Elasticsearch 6.1.3 loader
Elasticsearch URL https://db301e3a9602f088035cc828312ebdf2.us-east-1.aws.found.io:9243
For Elasticsearch version >= 6.0.0, the Kibana dashboards need to be imported via the Kibana API.
# docker run docker.elastic.co/beats/packetbeat:6.1.3 ./scripts/import_dashboards -es https://c2ddaa70b10cb93643b031042d4f6554.us-east-1.aws.found.io:9243 -user elastic -pass xxx
/usr/local/bin/docker-entrypoint: line 13: /usr/share/packetbeat/packetbeat: Operation not permitted
# docker run docker.elastic.co/beats/packetbeat:5.6.7 ./scripts/import_dashboards -es https://c2ddaa70b10cb93643b031042d4f6554.us-east-1.aws.found.io:9243 -user elastic -pass xxx
fail to create the Elasticsearch loader: Error creating Elasticsearch client: Couldn't connect to any of the configured Elasticsearch hosts
Exiting
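For context, two things seem to have changed in 6.x that are relevant here: dashboards are now imported through the Kibana API via the setup subcommand (as the 5.6.7 output above says), and the 6.x image sets Linux network capabilities on the packetbeat binary, which can make exec fail with "Operation not permitted" unless the container is granted them. A hedged sketch of what the 6.1.3 invocation might then look like (the Kibana host and credentials are placeholders):
# untested sketch: grant the capability the 6.x binary carries,
# and use the 6.x "setup" subcommand instead of import_dashboards
docker run --cap-add=NET_ADMIN \
  docker.elastic.co/beats/packetbeat:6.1.3 \
  setup --dashboards \
  -E setup.kibana.host=https://your-kibana-host:9243 \
  -E setup.kibana.username=elastic \
  -E setup.kibana.password=xxx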

This is something close to what I wanted to achieve. It is not based on docker, but it works!
1) Download packetbeat:
curl -L -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-6.1.3-x86_64.rpm
sudo rpm -vi packetbeat-6.1.3-x86_64.rpm
cd /usr/share/packetbeat/
2) Configure packetbeat.yml file:
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["611878ce312a4bc30040208f62a9c9341.us-east-1.aws.found.io:9243"]
  # Optional protocol and basic auth credentials.
  protocol: "https"
  username: "elastic"
  password: "xxx"

#============================== Kibana =====================================
setup.kibana:
  host: "https://b0440709b5f76af035e0a5915a763ebf1.us-east-1.aws.found.io:9243"

#============================== Dashboards =====================================
setup.dashboards.enabled: true
3) Start the packetbeat service:
/etc/init.d/packetbeat restart
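Alternatively, rather than leaving setup.dashboards.enabled: true for every start, the 6.x setup subcommand can load the dashboards once by hand. A sketch, assuming the standard RPM layout (binary under /usr/share/packetbeat, config in /etc/packetbeat):
# one-shot dashboard import using the config above (untested sketch)
/usr/share/packetbeat/bin/packetbeat setup --dashboards \
  -c /etc/packetbeat/packetbeat.yml -path.home /usr/share/packetbeat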

Related

How do I resolve Invalid SSH Key Entry error when starting App with GCE

I'm trying to launch my app on Google Compute Engine, and I get the following error:
Sep 26 22:46:09 debian google_guest_agent[411]: ERROR non_windows_accounts.go:199 Invalid ssh key entry - unrecognized format: ssh-rsa AAAAB...
I'm having a hard time interpreting it. I have the following startup script:
# Talk to the metadata server to get the project id
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
REPOSITORY="github_sleepywakes_thunderroost"
# Install logging monitor. The monitor will automatically pick up logs sent to
# syslog.
curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
service google-fluentd restart &
# Install dependencies from apt
apt-get update
apt-get install -yq ca-certificates git build-essential supervisor
# Install nodejs
mkdir /opt/nodejs
curl https://nodejs.org/dist/v16.15.0/node-v16.15.0-linux-x64.tar.gz | tar xvzf - -C /opt/nodejs --strip-components=1
ln -s /opt/nodejs/bin/node /usr/bin/node
ln -s /opt/nodejs/bin/npm /usr/bin/npm
# Get the application source code from the Google Cloud Repository.
# git requires $HOME and it's not set during the startup script.
export HOME=/root
git config --global credential.helper gcloud.sh
git clone https://source.developers.google.com/p/${PROJECTID}/r/${REPOSITORY} /opt/app/github_sleepywakes_thunderroost
# Install app dependencies
cd /opt/app/github_sleepywakes_thunderroost
npm install
# Create a nodeapp user. The application will run as this user.
useradd -m -d /home/nodeapp nodeapp
chown -R nodeapp:nodeapp /opt/app
# Configure supervisor to run the node app.
cat >/etc/supervisor/conf.d/node-app.conf << EOF
[program:nodeapp]
directory=/opt/app/github_sleepywakes_thunderroost
command=npm start
autostart=true
autorestart=true
user=nodeapp
environment=HOME="/home/nodeapp",USER="nodeapp",NODE_ENV="production"
stdout_logfile=syslog
stderr_logfile=syslog
EOF
supervisorctl reread
supervisorctl update
# Application should now be running under supervisor
My instance shows I have 2 public SSH keys. The second begins like this one in the error, but after about 12 characters it is different.
Any idea why this might be occurring?
Thanks in advance.
Once you deploy your VM instance, by default no SSH key is configured yet, but you can also configure the SSH key when deploying the VM instance.
To elaborate on @JohnHanley's answer, I tried to test this in my environment.
Created a VM instance and verified the SSH configuration. By default there is no SSH key configured; as said earlier, you can configure an SSH key when deploying the VM.
Created an SSH key pair via the CLI; you can use this link for instruction details (a CLI sketch follows below).
Navigate to your VM instance, then: Turn off > EDIT > Security > Add Item > SSH key 1 - copy+paste the generated public key > Save > Power ON the VM instance.
Then test whether the VM instance is accessible.
Documentation link: How to add SSH keys to project metadata.
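As a sketch of the CLI route (the key filename, username myuser, and instance name my-instance are placeholders): the guest agent expects each metadata entry in the form USERNAME:KEY, which is likely what the "unrecognized format" error is complaining about. Also note that add-metadata replaces the existing ssh-keys value, so include any keys you want to keep in the file:
# generate a key pair; "myuser" is a placeholder username
ssh-keygen -t rsa -f ~/.ssh/gce_key -C myuser
# metadata entries must be in the form USERNAME:KEY
echo "myuser:$(cat ~/.ssh/gce_key.pub)" > /tmp/ssh_keys.txt
# attach the key to the instance (overwrites existing ssh-keys metadata)
gcloud compute instances add-metadata my-instance \
  --metadata-from-file ssh-keys=/tmp/ssh_keys.txt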

Docker-compose can't start apache server

When I run sudo docker-compose up inside my directory, I get the error below. I'm trying to make a container that hosts a PHP website where you can run whoami on it.
Thanks
(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80
| no listening sockets available, shutting down
| AH00015: Unable to open logs
Dockerfile:
FROM ubuntu:16.04
RUN apt update
RUN apt install -y apache2 php libapache2-mod-php
RUN useradd -d /home/cp/ -m -s /bin/nologin cp
WORKDIR /home/cp
COPY source .
USER cp
ENTRYPOINT service apache2 start && /bin/bash
docker-compose.yml
version: '2'
services:
  filebrowser:
    build: .
    ports:
      - '8000:80'
    stdin_open: true
    tty: true
    volumes:
      - ./source:/var/www/html
      - ./logs:/var/log/apache2
There's a long-standing general rule in Unix-like operating systems that only the root user can open "low" ports 0-1023. Since you're trying to run Apache on the default HTTP port 80, but you're running it as a non-root user, you're getting the "permission denied" error you see.
The absolute easiest answer here is to use a prebuilt image that has PHP and Apache preinstalled. The Docker Hub php image includes a variant of this. You can use a simpler Dockerfile:
FROM php:7.4-apache
# Has Apache, mod-php preinstalled and a correct CMD already,
# so the only thing you need to do is
COPY source /var/www/html
# If you want to run as a non-root user, you can specify
RUN useradd -r -U cp
ENV APACHE_RUN_USER cp
ENV APACHE_RUN_GROUP cp
With the matching docker-compose.yml
version: '3' # version 2 vs 3 doesn't really matter
services:
  filebrowser:
    build: .
    ports:
      - '8000:80'
    volumes:
      - ./logs:/var/log/apache2
If you want to build things up from scratch, the next easiest option would be the Apache User directive: have your container start as root (so it can bind to port 80) but then instruct Apache to switch to the unprivileged user once it's started up. The standard php:...-apache image has an option to do this on its own which I've shown above.
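If you want to keep the Ubuntu base image from the question instead, here is a rough, untested sketch of that approach; on Ubuntu the Apache user and group are set in /etc/apache2/envvars:
FROM ubuntu:16.04
RUN apt update && apt install -y apache2 php libapache2-mod-php
RUN useradd -d /home/cp -m -s /usr/sbin/nologin cp
# point Apache's worker processes at the unprivileged user; the
# container itself still starts as root so it can bind to port 80
RUN sed -i 's/^export APACHE_RUN_USER=.*/export APACHE_RUN_USER=cp/' /etc/apache2/envvars && \
    sed -i 's/^export APACHE_RUN_GROUP=.*/export APACHE_RUN_GROUP=cp/' /etc/apache2/envvars
COPY source /var/www/html
# deliberately no USER directive; apachectl runs in the foreground as PID 1
CMD ["apachectl", "-D", "FOREGROUND"]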

BitBucket deployment using SSH keys to remote server

I am trying to write a YAML pipeline script to deploy files that have been altered from my bitbucket repository to my remote server using ssh keys. The document that I have in place at the moment was copied from bitbucket itself and has errors:
pipelines:
  default:
    - step:
        name: Deploy to test
        deployment: test
        script:
          - pipe: atlassian/sftp-deploy:0.3.1
          - variables:
              USER: $USER
              SERVER: $SERVER
              REMOTE_PATH: $REMOTE_PATH
              LOCAL_PATH: $LOCAL_PATH
I am getting the following error
Configuration error
There is an error in your bitbucket-pipelines.yml at [pipelines > default > 0 > step > script > 1]. To be precise: Missing or empty command string. Each item in this list should either be a single command string or a map defining a pipe invocation.
My SSH public and private keys are set up in Bitbucket along with the fingerprint and host. The variables have also been set up.
How do I go about setting up my YAML deploy script to connect to my remote server via ssh and transfer the files?
Try updating the variables section to become:
- variables:
  - USER: $USER
  - SERVER: $SERVER
  - REMOTE_PATH: $REMOTE_PATH
  - LOCAL_PATH: $LOCAL_PATH
Here is an example of how to set variables: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_variablesvariables
Your directive - step has to be indented.
I have a bitbucket-pipelines.yml like this (using rsync instead of sftp):
# This is a sample build configuration for PHP.
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: php:7.2.1-fpm

pipelines:
  default:
    - step:
        script:
          - apt-get update
          - apt-get install zip -y
          - apt-get install unzip -y
          - apt-get install libgmp3-dev -y
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install
          - cp .env.example .env
          #- vendor/bin/phpunit
          - pipe: atlassian/rsync-deploy:0.2.0
            variables:
              USER: $DEPLOY_USER
              SERVER: $DEPLOY_SERVER
              REMOTE_PATH: $DEPLOY_PATH
              LOCAL_PATH: '.'
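Applying the same nesting to the sftp-deploy pipe from the original question, the step would presumably look like this (an untested sketch; the key point is that variables sits under the pipe rather than being its own list item):
pipelines:
  default:
    - step:
        name: Deploy to test
        deployment: test
        script:
          - pipe: atlassian/sftp-deploy:0.3.1
            variables:
              USER: $USER
              SERVER: $SERVER
              REMOTE_PATH: $REMOTE_PATH
              LOCAL_PATH: $LOCAL_PATH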
I suggest using their online editor in the repository for editing bitbucket-pipelines.yml; it checks the whole formal YAML structure and you can't commit an invalid file.
Even if you check the file in some other YAML editor it may look fine, but not necessarily be valid according to the Bitbucket specification. Their online editor does a fine job.
I also suggest visiting their community on the Atlassian Community site, as it's very active and sometimes their staff members provide answers.
However, I struggle with the many dependencies needed to run tests properly (the actual bitbucket-pipelines.yml is becoming bigger and bigger).
Maybe there is some nicely prepared Docker image for this job.

How to install schema registry

I am looking for options to install the Confluent Schema Registry. Is it possible to download and install the registry alone and make it work with an existing Kafka setup?
Thanks
Assuming you have Zookeeper/Kafka running already, you can easily run the Confluent Schema Registry using Docker by running the following command:
docker run -p 8081:8081 \
  -e SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=host.docker.internal:2181 \
  -e SCHEMA_REGISTRY_HOST_NAME=localhost \
  -e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
  -e SCHEMA_REGISTRY_DEBUG=true \
  confluentinc/cp-schema-registry:5.3.2
Parameters:
-p 8081:8081 - publishes port 8081 from the container to your machine
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL - your Zookeeper host and port; I'm using host.docker.internal to resolve the local machine that is hosting Zookeeper (outside of the container)
SCHEMA_REGISTRY_HOST_NAME - the hostname advertised in Zookeeper. This is required if you are running Schema Registry with multiple nodes. It is also needed because it defaults to the Java canonical hostname for the container, which may not always be resolvable in a Docker environment.
SCHEMA_REGISTRY_LISTENERS - the Schema Registry host and port number to open
SCHEMA_REGISTRY_DEBUG - run in debug mode
Note: the command above uses version 5.3.2; make sure this version is aligned with your Kafka version.
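Once the container is up, you can sanity-check it against the registry's REST API; test-value below is just an example subject name:
# list registered subjects (an empty list on a fresh registry)
curl http://localhost:8081/subjects
# register a trivial string schema under an example subject
curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"schema": "{\"type\": \"string\"}"}' \
  http://localhost:8081/subjects/test-value/versions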
Yes, you can use your existing Kafka setup; just match it to a compatible version of the Confluent Platform. Here are the docs on getting started:
https://docs.confluent.io/current/schema-registry/docs/intro.html#installation
tl;dr download the platform to pull out the pieces you need or get the docker image and point it at your Kafka cluster.
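As a variant, newer Confluent images can also point the registry straight at the Kafka brokers instead of Zookeeper, via the kafkastore bootstrap-servers setting; a sketch, where kafka:9092 stands in for your broker address:
docker run -p 8081:8081 \
  -e SCHEMA_REGISTRY_HOST_NAME=localhost \
  -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://kafka:9092 \
  confluentinc/cp-schema-registry:5.3.2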

Getting gitlab-runner 10.0.2 cloning repo using ssh

I have a GitLab installation and I am trying to set up a gitlab-runner using a Docker executor. All is OK until tests start running; then, since my projects are private and have no HTTP access enabled, they fail at clone time with:
Running with gitlab-runner 10.0.2 (a9a76a50)
on Jupiter-docker (5f4ed288)
Using Docker executor with image fedora:26 ...
Using docker image sha256:1f082f05a7fc20f99a4ccffc0484f45e6227984940f2c57d8617187b44fd5c46 for predefined container...
Pulling docker image fedora:26 ...
Using docker image fedora:26 ID=sha256:b0b140824a486ccc0f7968f3c6ceb6982b4b77e82ef8b4faaf2806049fc266df for build container...
Running on runner-5f4ed288-project-5-concurrent-0 via 2705e39bc3d7...
Cloning repository...
Cloning into '/builds/pmatos/tob'...
remote: Git access over HTTP is not allowed
fatal: unable to access 'https://gitlab.linki.tools/pmatos/tob.git': The requested URL returned error: 403
ERROR: Job failed: exit code 1
I have looked into https://docs.gitlab.com/ee/ci/ssh_keys/README.html
and decided to give it a try so my .gitlab-ci.yml starts with:
image: fedora:26
before_script:
# Install ssh-agent if not already installed, it is required by Docker.
# (change apt-get to yum if you use a CentOS-based image)
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
# Run ssh-agent (inside the build environment)
- eval $(ssh-agent -s)
# Add the SSH key stored in SSH_PRIVATE_KEY variable to the agent store
- ssh-add <(echo "$SSH_PRIVATE_KEY")
# For Docker builds disable host key checking. Be aware that by adding that
# you are suspectible to man-in-the-middle attacks.
# WARNING: Use this only with the Docker executor, if you use it with shell
# you will overwrite your user's SSH config.
- mkdir -p ~/.ssh
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
... JOBS...
I set up the SSH_PRIVATE_KEY variable correctly, but the issue is that the cloning of the project happens before before_script. I then tried to start the container with -v /home/pmatos/gitlab-runner_ssh:/root/.ssh, but the clone still tries to use HTTP. How can I force the container to clone through SSH?
Due to the way GitLab CI works, CI requires HTTPS access to the repository. Therefore, if you enable CI, you need to have HTTPS repo access enabled as well.
This is, however, not an issue privacy-wise, as making the repository accessible over HTTPS doesn't stop GitLab from checking whether you're authorized to access it.
I then tried to start the container with -v /home/pmatos/gitlab-runner_ssh:/root/.ssh but still the cloning is trying to use HTTP
Try at least, if possible, to add within your container:
git config --global url."ssh://git@".insteadOf https://
(assuming the ssh user is git)
That would make any clone of any https URL use ssh.
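For instance, appended to the before_script from the question (a sketch; keep in mind the runner's own clone happens before before_script runs, so this only affects clones and fetches your job itself performs, such as submodules or extra repositories):
before_script:
  # ... ssh-agent setup from the question ...
  - git config --global url."ssh://git@".insteadOf https://
  # any later https clone inside the job now goes over ssh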