ELK
The following Docker images (as shown by docker images) are used in this guide:

docker.elastic.co/elasticsearch/elasticsearch   8.5.0     65719a2cfaea   2 years ago     1.29GB
docker.elastic.co/logstash/logstash             8.5.0     df0e328413e1   2 years ago     753MB
docker.elastic.co/kibana/kibana                 8.5.0     4737b8e943ba   2 years ago     715MB

To deploy an ELK Stack using the Docker images listed above, follow these steps. This guide walks through the process of deploying and connecting the three components (Elasticsearch, Logstash, and Kibana) using Docker.
1. Prerequisites
- Ensure you have Docker installed and running on your machine. 
- You should have the Docker images for elasticsearch, logstash, and kibana available (the versions listed above).
- Make sure to allocate sufficient resources (RAM, CPU) for Elasticsearch (it can be resource-intensive). 
2. Create a Docker Network
First, create a custom Docker network so that the three containers can communicate with each other:
docker network create elk

3. Deploy Elasticsearch Container
Elasticsearch is the central component of the ELK stack and needs to be set up first. Here's how to deploy it:
docker run -d --name elasticsearch \
  --network elk \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=false" \
  -e "ES_JAVA_OPTS=-Xms2g -Xmx2g" \
  -p 9200:9200 \
  docker.elastic.co/elasticsearch/elasticsearch:8.5.0

Note: Elasticsearch 8.x enables security (TLS and authentication) by default, so without xpack.security.enabled=false the plain-HTTP requests used later in this guide would be rejected. Disable security like this only for local testing, never in production.

Explanation of the flags:
- --name elasticsearch: The name of the container.
- --network elk: Connects the container to the elk network.
- -e "discovery.type=single-node": This option makes Elasticsearch run as a single node (important for local setups).
- -e "ES_JAVA_OPTS=-Xms2g -Xmx2g": Allocates memory (2 GB for both min and max).
- -p 9200:9200: Exposes port 9200 for Elasticsearch's REST API.
- docker.elastic.co/elasticsearch/elasticsearch:8.5.0: Specifies the Elasticsearch Docker image.
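Elasticsearch can take tens of seconds to come up after docker run returns, so scripts that continue immediately tend to fail. The startup can be smoke-tested with a small polling helper (a sketch; wait_for_http is a hypothetical name, and it assumes curl is installed and that security is disabled or credentials are passed to curl):

```shell
# wait_for_http URL MAX_ATTEMPTS
# Polls URL once per second; returns 0 as soon as curl succeeds,
# or 1 after MAX_ATTEMPTS failed attempts.
wait_for_http() {
  url=$1
  attempts=${2:-30}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example: block until the node answers (or give up after ~60s):
# wait_for_http http://localhost:9200 60 && echo "Elasticsearch is up"
```

This is handy in provisioning scripts that need to create indices or start Logstash only once the node is actually answering.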
4. Deploy Logstash Container
Logstash is used to process logs and send them to Elasticsearch. Here’s how to deploy it:
docker run -d --name logstash \
  --network elk \
  -p 5044:5044 \
  -e "LS_JAVA_OPTS=-Xms1g -Xmx1g" \
  docker.elastic.co/logstash/logstash:8.5.0

Explanation of the flags:
- --name logstash: The name of the container.
- --network elk: Connects the container to the elk network.
- -p 5044:5044: Exposes port 5044 for Logstash's Beats input (e.g., Filebeat).
- -e "LS_JAVA_OPTS=-Xms1g -Xmx1g": Allocates memory (1 GB for both the minimum and maximum heap). Note that the Logstash image reads LS_JAVA_OPTS, not LOGSTASH_JAVA_OPTS.
- docker.elastic.co/logstash/logstash:8.5.0: Specifies the Logstash Docker image.
You can also mount your Logstash configuration file (logstash.conf) if needed, for example:
docker run -d --name logstash \
  --network elk \
  -v /path/to/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
  -p 5044:5044 \
  docker.elastic.co/logstash/logstash:8.5.0

Make sure to configure logstash.conf for your input (e.g., Filebeat), filter, and output (Elasticsearch).
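A mis-edited pipeline file makes the Logstash container crash-loop, so it is worth sanity-checking the file before mounting it. A rough sketch (braces_balanced is a hypothetical helper; for real validation, run Logstash itself with its --config.test_and_exit flag against the mounted file):

```shell
# braces_balanced FILE
# Quick sanity check that a pipeline file has matching { and } counts.
# This is not a parser -- it only catches the most common copy/paste
# mistake (a dropped closing brace).
braces_balanced() {
  opens=$(tr -cd '{' < "$1" | wc -c)
  closes=$(tr -cd '}' < "$1" | wc -c)
  [ "$opens" -eq "$closes" ]
}

# Example:
# braces_balanced /path/to/logstash.conf || echo "unbalanced braces"
```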
5. Deploy Kibana Container
Kibana is used for data visualization in the ELK stack. Here's how to deploy it:
docker run -d --name kibana \
  --network elk \
  -p 5601:5601 \
  docker.elastic.co/kibana/kibana:8.5.0

Explanation of the flags:
- --name kibana: The name of the container.
- --network elk: Connects the container to the elk network.
- -p 5601:5601: Exposes port 5601 for Kibana's web interface.
- docker.elastic.co/kibana/kibana:8.5.0: Specifies the Kibana Docker image.
6. Verify the Deployment
- Once all containers are running, check their status:
  docker ps
  You should see the Elasticsearch, Logstash, and Kibana containers running.
- Elasticsearch should be accessible at http://localhost:9200. You can verify it by making a request:
  curl http://localhost:9200
  It should return JSON data about your Elasticsearch node.
- Kibana should be accessible at http://localhost:5601. Open this in your browser, and you should see the Kibana web interface.
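For scripted checks, grepping the curl response is more robust than eyeballing it; the root endpoint of an Elasticsearch node always includes a cluster_name field. A minimal sketch (es_looks_healthy is a hypothetical name; the plain-HTTP usage assumes security is disabled, otherwise pass credentials to curl):

```shell
# es_looks_healthy JSON
# Returns 0 if the string resembles the JSON an Elasticsearch node
# serves from its root endpoint (it always contains "cluster_name").
es_looks_healthy() {
  printf '%s' "$1" | grep -q '"cluster_name"'
}

# Usage against the running container:
# es_looks_healthy "$(curl -s http://localhost:9200)" && echo "node OK"
```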
7. Configure Logstash Input and Output
If you're planning to use Filebeat or any other log shipper, you will need to configure Logstash with the appropriate input and output settings. For example, you can configure Logstash to receive logs from Filebeat and forward them to Elasticsearch.
Here’s an example logstash.conf:
input {
  beats {
    port => 5044
  }
}
filter {
  # Custom filtering (if necessary)
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}

You can mount this configuration file into the Logstash container using the -v flag.
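The empty filter block is where parsing logic goes. As one hedged example, if the shipped logs were Apache access logs, a grok filter using the standard COMBINEDAPACHELOG pattern (shipped with Logstash's grok plugin) would split each line into structured fields, and a date filter would set @timestamp from the log's own timestamp:

```
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
```

Adjust the pattern to whatever format your log shipper actually sends; an unmatched grok simply tags the event with _grokparsefailure rather than dropping it.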
8. Access Kibana and Explore Data
Once Kibana is up and running, you can access it via the browser at http://localhost:5601. Here, you can set up index patterns, explore your logs, and create visualizations and dashboards based on the data sent from Logstash.
To simplify the deployment of the ELK Stack using Docker, you can create a docker-compose.yml file. This YAML file will allow you to define and configure all the required services (Elasticsearch, Logstash, and Kibana) in a single file, and then spin them up using the docker-compose command.
Here’s a step-by-step guide to create the docker-compose.yml file for your ELK stack and deploy it:
1. Create a docker-compose.yml File
First, create a new directory for your ELK Stack project:
mkdir elk-stack
cd elk-stack

Now, create a file called docker-compose.yml inside this directory:
touch docker-compose.yml

Edit the docker-compose.yml file with the following content:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.5.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - ES_JAVA_OPTS=-Xms2g -Xmx2g
    networks:
      - elk
    ports:
      - "9200:9200"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    ulimits:
      memlock:
        soft: -1
        hard: -1
  logstash:
    image: docker.elastic.co/logstash/logstash:8.5.0
    container_name: logstash
    environment:
      - LS_JAVA_OPTS=-Xms1g -Xmx1g
    networks:
      - elk
    ports:
      - "5044:5044"
    volumes:
      - ./logstash/config/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:8.5.0
    container_name: kibana
    networks:
      - elk
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
volumes:
  elasticsearch-data:
    driver: local

Explanation of the Configuration:
- version: '3': The version of the Docker Compose file format (version 3 is commonly used).
- services: Defines the services (Elasticsearch, Logstash, and Kibana) that will be spun up.
  - Elasticsearch:
    - discovery.type=single-node is for single-node Elasticsearch deployments (for local environments).
    - ES_JAVA_OPTS=-Xms2g -Xmx2g configures the Java heap memory.
    - Exposes port 9200 for communication.
    - Data is stored in a Docker volume called elasticsearch-data.
  - Logstash:
    - The logstash.conf configuration file is mounted into the container for custom processing rules (you'll need to create this configuration).
    - Exposes port 5044 for receiving log data from Filebeat (or any other log shipper).
    - depends_on ensures Logstash starts after the Elasticsearch container has started (this controls start order only; it does not wait for Elasticsearch to be ready to serve requests).
  - Kibana:
    - Exposes port 5601 for the Kibana web interface.
    - depends_on ensures Kibana starts after the Elasticsearch container has started.
- networks: All services are connected to a custom Docker bridge network called elk so that they can communicate with each other.
- volumes: Elasticsearch data is stored in a named volume (elasticsearch-data), so it persists even if the container is stopped or removed.
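Because Compose's depends_on only delays a container until its dependency has started (not until Elasticsearch can actually answer requests), Kibana and Logstash may still log connection errors while Elasticsearch boots. One way to close that gap is a healthcheck plus a long-form depends_on condition. A sketch, with two assumptions: a recent Docker Compose that implements the Compose Specification (the condition form was not part of the legacy v3 file format), and curl available inside the Elasticsearch image:

```yaml
  elasticsearch:
    # ...existing settings from the file above...
    healthcheck:
      test: ["CMD-SHELL", "curl -fs http://localhost:9200 || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 12

  kibana:
    # ...existing settings from the file above...
    depends_on:
      elasticsearch:
        condition: service_healthy
```

With this in place, Kibana is only started once the healthcheck passes; the services usually recover on their own anyway, so this is a polish step rather than a requirement.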
2. Create the Logstash Configuration File
Now, create the Logstash pipeline configuration file in the logstash/config directory. You can use the following command to create the directory and configuration file:
mkdir -p logstash/config
touch logstash/config/logstash.conf

Edit the logstash.conf file and add the following basic configuration (or customize it based on your needs):
input {
  beats {
    port => 5044
  }
}
filter {
  # Your filter configuration (optional)
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}

This configuration tells Logstash to listen for log data from Beats (e.g., Filebeat) on port 5044 and send the data to Elasticsearch. The logs will be indexed with the logstash-* pattern in Elasticsearch.
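The %{+YYYY.MM.dd} part of the index option is a date-format placeholder: Logstash formats each event's @timestamp (in UTC), so you get one index per day, e.g. logstash-2024.01.31. Scripts that need to target today's index can build the same name in shell:

```shell
# Build today's index name the same way the %{+YYYY.MM.dd} sprintf does.
# Logstash formats the event's @timestamp in UTC, hence date -u.
index="logstash-$(date -u +%Y.%m.%d)"
echo "$index"

# Example use -- count today's documents (assumes security is disabled,
# otherwise add credentials to curl):
# curl -s "http://localhost:9200/$index/_count"
```

Daily indices keep individual indices small and make retention easy: deleting old data is just deleting old indices.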
3. Spin Up the Containers with Docker Compose
Now that your docker-compose.yml and Logstash configuration are ready, use Docker Compose to spin up the containers.
From the directory containing the docker-compose.yml file, run:
docker-compose up -d

This will:
- Pull the Docker images for Elasticsearch, Logstash, and Kibana. 
- Create and start the containers. 
- Attach the containers to the custom Docker network (elk).
4. Verify the Deployment
You can check if everything is running correctly using:
docker ps

You should see all three containers running (Elasticsearch, Logstash, and Kibana).
5. Access Kibana
Once the containers are up and running, open your web browser and go to:
http://localhost:5601

This will open the Kibana web interface where you can interact with the data stored in Elasticsearch, configure dashboards, create visualizations, and explore logs.
6. Shutting Down the Stack
To stop and remove the containers, run:
docker-compose down

This will stop the containers and remove them. If you want to remove the volumes as well (which deletes your Elasticsearch data), run:
docker-compose down -v

Additional Customization
- You can customize the Logstash pipeline configuration further based on your needs (e.g., different input/output plugins or filters). 
- You can also tweak the memory settings for Elasticsearch and Logstash if needed. 
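Memory tweaks like the ones mentioned above can live in a separate override file, so the base docker-compose.yml stays untouched; docker-compose merges a docker-compose.override.yml in the same directory automatically. A sketch (the heap values here are illustrative, not recommendations):

```yaml
# docker-compose.override.yml
services:
  elasticsearch:
    environment:
      - ES_JAVA_OPTS=-Xms1g -Xmx1g
  logstash:
    environment:
      - LS_JAVA_OPTS=-Xms512m -Xmx512m
```

Depending on your Compose version you may also need a version key in the override file; keeping overrides separate makes it easy to run the same base file on machines with different amounts of RAM.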