Running the Elastic Stack (ELK) in Docker containers with Docker Compose

The Elastic Stack (ELK) consists of three open-source components that work together to provide log collection, analysis, and visualization.

The three main components are:

Elasticsearch is the core of the Elastic Stack. It is a distributed search and analytics engine. Its job in the stack is to store incoming logs from Logstash and make them searchable in real time.

Logstash – a data-processing pipeline that collects data, transforms logs arriving simultaneously from multiple sources, and ships them to a store such as Elasticsearch.

Kibana is a graphical tool for data visualization. In the Elastic Stack, it is used to build charts and dashboards that give meaning to the raw data stored in Elasticsearch.

1. Install Docker and Docker-Compose on Linux

# Debian/Ubuntu
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

# RHEL/CentOS/RockyLinux 8
sudo yum -y install yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io

# Fedora
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io

Then add your user to the docker group so you can run docker commands without sudo.

sudo usermod -aG docker $USER
newgrp docker
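You can sanity-check the group membership afterwards; a small sketch (group changes only apply to new sessions, which is why the newgrp step above is needed):

```shell
# Check whether the current session sees the docker group.
if id -nG | grep -qw docker; then
  echo "docker group: OK"
else
  echo "docker group: missing (log out and back in, or run 'newgrp docker')"
fi
```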

Adding Docker-Compose

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Now start and enable docker.

sudo systemctl start docker && sudo systemctl enable docker

2. Provision the Elastic Stack (ELK) containers

Let’s start by cloning the deployment files from GitHub:

git clone https://github.com/deviantony/docker-elk.git
cd docker-elk

Open the deployment file for editing:

nano docker-compose.yml

The Elastic Stack deployment file consists of three main parts:

Elasticsearch – with ports:

9200: Elasticsearch HTTP
9300: Elasticsearch TCP transport

Logstash – with ports:

5044: Logstash Beats input
5000: Logstash TCP input
9600: Logstash monitoring API

Kibana – with port 5601

In the open file, you can make the following settings:

Configure Elasticsearch

The configuration file for Elasticsearch is stored in the elasticsearch/config/elasticsearch.yml file.

So you can set up the environment by setting the cluster name and the license type, as shown below:

elasticsearch:
  environment:
    cluster.name: elk-cluster
    xpack.license.self_generated.type: basic

Configure Kibana

The configuration file is stored in the kibana/config/kibana.yml file.

Here you can set environment variables

kibana:
  environment:
    SERVER_NAME: kibana.automationtools.me

JVM setup

Typically, both Elasticsearch and Logstash start with one quarter of the total host memory allocated to the JVM heap.

You can customize the memory by setting the following options.

For Logstash (example with increasing memory up to 1 GB)

logstash:
  environment:
    LS_JAVA_OPTS: -Xmx1g -Xms1g

For Elasticsearch (example with memory increase up to 1 GB)

elasticsearch:
  environment:
    ES_JAVA_OPTS: -Xmx1g -Xms1g
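If you are unsure what heap size to pick, the one-quarter-of-RAM rule above gives a starting point. A Linux-only sketch for computing that figure from /proc/meminfo:

```shell
# MemTotal is reported in kB; divide by 4 for the heap, then by 1024 for MiB.
quarter_mib=$(awk '/^MemTotal:/ {printf "%d", $2/4/1024}' /proc/meminfo)
echo "Suggested heap: -Xms${quarter_mib}m -Xmx${quarter_mib}m"
```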

Setting up usernames and passwords

To customize usernames, passwords, and version, edit the .env file.

nano .env

Make any necessary changes to the version, usernames, and passwords.

ELASTIC_VERSION=<VERSION>

## Passwords for stack users
#

# User 'elastic' (built-in)
#
# Superuser role, full access to cluster management and data indices.
# <https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html>
ELASTIC_PASSWORD='Verystrongpassword'

# User 'logstash_internal' (custom)
#
# The user Logstash uses to connect and send data to Elasticsearch.
# <https://www.elastic.co/guide/en/logstash/current/ls-security.html>
LOGSTASH_INTERNAL_PASSWORD='Verystrongpassword'

# User 'kibana_system' (built-in)
#
# The user Kibana uses to connect and communicate with Elasticsearch.
# <https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html>
KIBANA_SYSTEM_PASSWORD='Verystrongpassword'
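Before deploying, it may be worth checking that no stock passwords are left in .env (the docker-elk repository ships with 'changeme' as the default). A sketch, shown here against a throwaway copy rather than your real file:

```shell
# Write a sample .env for the demo; in practice, point this at your real .env.
cat > /tmp/demo.env <<'EOF'
ELASTIC_PASSWORD='changeme'
LOGSTASH_INTERNAL_PASSWORD='Verystrongpassword'
EOF

# Flag any password variable still set to the stock value.
if grep -qE "PASSWORD='?changeme'?" /tmp/demo.env; then
  echo "WARNING: placeholder password found"
fi
```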

3. Configure persistent volumes

In order for the Elastic stack to store data, we need to properly map the volumes.

In the YAML file, we have multiple volumes that need to be mapped.

In this guide, I will use a secondary drive attached to the server.

Identify the disk:

$ lsblk
NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda           8:0    0  40G  0 disk 
├─sda1        8:1    0   1G  0 part /boot
└─sda2        8:2    0  39G  0 part 
  ├─rl-root 253:0    0  35G  0 lvm  /
  └─rl-swap 253:1    0   4G  0 lvm  [SWAP]
sdb           8:16   0  10G  0 disk 
└─sdb1        8:17   0  10G  0 part

Format the drive and create an XFS file system on it.

sudo parted --script /dev/sdb "mklabel gpt"
sudo parted --script /dev/sdb "mkpart primary 0% 100%"
sudo mkfs.xfs /dev/sdb1

Mount the drive to the desired path.

sudo mkdir /mnt/datastore
sudo mount /dev/sdb1 /mnt/datastore

Check if the drive has been mounted.

$ sudo mount | grep /dev/sdb1
/dev/sdb1 on /mnt/datastore type xfs
(rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
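Note that this mount will not survive a reboot. To make it persistent, you could append an entry like the following to /etc/fstab (a sketch; confirm the device name, or use the UUID reported by blkid, before relying on it):

```
/dev/sdb1  /mnt/datastore  xfs  defaults  0 0
```

Running sudo mount -a afterwards will surface any syntax errors in the entry without a reboot.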

Create persistent volumes on the disk.

sudo mkdir /mnt/datastore/setup
sudo mkdir /mnt/datastore/elasticsearch

Set the correct permissions.

sudo chmod -R 775 /mnt/datastore
sudo chown -R $USER:docker /mnt/datastore
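If you want to rehearse the permission step before touching the real data directory, the same pattern can be applied to a throwaway directory (chown is omitted here, since the docker group may not exist on every machine):

```shell
# Create a scratch copy of the layout and apply the same mode bits.
demo_dir=$(mktemp -d)
mkdir -p "$demo_dir/setup" "$demo_dir/elasticsearch"
chmod -R 775 "$demo_dir"

# Verify the mode actually applied (prints 775).
stat -c '%a' "$demo_dir/setup"
rm -rf "$demo_dir"
```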

On RHEL-based systems, put SELinux into permissive mode as shown below (note that this relaxes SELinux enforcement system-wide):

sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config

Create external volumes:

For elasticsearch

docker volume create --driver local \
     --opt type=none \
     --opt device=/mnt/datastore/elasticsearch \
     --opt o=bind elasticsearch

For setup

docker volume create --driver local \
     --opt type=none \
     --opt device=/mnt/datastore/setup \
     --opt o=bind setup

Check if the volumes have been created.

$ docker volume list
DRIVER    VOLUME NAME
local     elasticsearch
local     setup

View more detailed information about a volume.

$ docker volume inspect setup
[
    {
        "CreatedAt": "2022-05-06T13:19:33Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/setup/_data",
        "Name": "setup",
        "Options": {
            "device": "/mnt/datastore/setup",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]

Go back to the YAML file and add these lines to the end of the file.

$ nano docker-compose.yml
.......
volumes:
  setup:
    external: true
  elasticsearch:
    external: true

You should now have a YAML file ready.

After making the necessary changes, start the Elastic stack with the command:

docker-compose up -d

Once completed, check if the containers are running:

$ docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED          STATUS         PORTS                                                                                                                                                                        NAMES
096ddc76c6b9   docker-elk_logstash        "/usr/local/bin/dock…"   9 seconds ago    Up 5 seconds   0.0.0.0:5000->5000/tcp, :::5000->5000/tcp, 0.0.0.0:5044->5044/tcp, :::5044->5044/tcp, 0.0.0.0:9600->9600/tcp, 0.0.0.0:5000->5000/udp, :::9600->9600/tcp, :::5000->5000/udp   docker-elk-logstash-1
ec3aab33a213   docker-elk_kibana          "/bin/tini -- /usr/l…"   9 seconds ago    Up 5 seconds   0.0.0.0:5601->5601/tcp, :::5601->5601/tcp                                                                                                                                    docker-elk-kibana-1
b365f809d9f8   docker-elk_setup           "/entrypoint.sh"         10 seconds ago   Up 7 seconds   9200/tcp, 9300/tcp                                                                                                                                                           docker-elk-setup-1
45f6ba48a89f   docker-elk_elasticsearch   "/bin/tini -- /usr/l…"   10 seconds ago   Up 7 seconds   0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp                                                                                         docker-elk-elasticsearch-1

Check if Elasticsearch is running:

$ curl http://localhost:9200 -u elastic:Verystrongpassword
{
  "name" : "45f6ba48a89f",
  "cluster_name" : "elk-cluster",
  "cluster_uuid" : "hGyChEAVQD682yVAx--iEQ",
  "version" : {
    "number" : "8.1.3",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "39afaa3c0fe7db4869a161985e240bd7182d7a07",
    "build_date" : "2022-04-19T08:13:25.444693396Z",
    "build_snapshot" : false,
    "lucene_version" : "9.0.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
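If you want to script this health check, the cluster name can be extracted from the response without extra tooling; a sed sketch, shown against a saved copy of the JSON rather than a live request:

```shell
# Save a trimmed copy of the root response (in practice: curl ... > /tmp/es_root.json).
cat > /tmp/es_root.json <<'EOF'
{
  "name" : "45f6ba48a89f",
  "cluster_name" : "elk-cluster"
}
EOF

# Pull out the cluster_name value (prints: elk-cluster).
sed -n 's/.*"cluster_name" *: *"\([^"]*\)".*/\1/p' /tmp/es_root.json
```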
4. Access the Kibana dashboard

At this point, you can get started and access the Kibana dashboard running on port 5601.

Now access Kibana using the URL http://IP_Address:5601 or http://Domain_name:5601.

Log in using the credentials set for the Elasticsearch user:

Username: elastic
Password: Verystrongpassword

To verify that the ELK stack works end to end, send it some log entries.

Logstash accepts content over TCP on port 5000, so you can pipe a log file to it:

cat /path/to/logfile.log | nc -q0 localhost 5000

For example:

cat /var/log/syslog | nc -q0 localhost 5000
