How To Install Elasticsearch, Logstash, and Kibana (ELK Stack) on Ubuntu 20.04

The ELK Stack is a full-featured data analytics platform consisting of three open-source tools: Elasticsearch, Logstash, and Kibana. The stack lets you store and manage logs centrally and gives you the ability to analyze them.

In this post, we will see how to install the ELK stack on Ubuntu 20.04.

Install ELK Stack

Log Monitoring With ELK Stack

Beats – Installed on client machines; collects logs and sends them to Logstash.

Logstash – Processes the logs sent by Beats running on client machines.

Elasticsearch – Stores logs and events from Logstash and offers the ability to search them in real time.

Kibana – Provides visualization of events and logs.

Install Java

Elasticsearch requires either OpenJDK or Oracle JDK to be available on your machine.

For this demo, I am using OpenJDK. Install Java using the command below, along with wget and the HTTPS transport package for APT.

sudo apt update

sudo apt install -y openjdk-11-jdk wget apt-transport-https curl

Check the Java version.

java -version


openjdk version "11.0.7" 2020-04-14
OpenJDK Runtime Environment (build 11.0.7+10-post-Ubuntu-3ubuntu1)
OpenJDK 64-Bit Server VM (build 11.0.7+10-post-Ubuntu-3ubuntu1, mixed mode, sharing)

Add ELK repository

ELK stack packages are available in the Elastic official repository.

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

echo "deb https://artifacts.elastic.co/packages/oss-7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Install & Configure Elasticsearch

Elasticsearch is an open-source search engine that provides real-time, distributed, multitenant-capable full-text search with an HTTP web interface and schema-free JSON documents.

Install the latest version of Elasticsearch using the apt command.

sudo apt update

sudo apt install -y elasticsearch-oss

Start and enable the Elasticsearch service.

sudo systemctl start elasticsearch

sudo systemctl enable elasticsearch

Wait a minute or two, then run the command below to check the status of Elasticsearch.

curl -X GET http://localhost:9200


{
  "name" : "ubuntu.itzgeek.local",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "AB9giOoWQo2nReENAICKig",
  "version" : {
    "number" : "7.7.1",
    "build_flavor" : "oss",
    "build_type" : "deb",
    "build_hash" : "ad56dce891c901a492bb1ee393f12dfff473a423",
    "build_date" : "2020-05-28T16:30:01.040088Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

The above output confirms that Elasticsearch is up and running fine.
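If you want to script against this endpoint, the JSON response is easy to pick apart in the shell. A minimal sketch, assuming python3 is available, using a trimmed copy of the response above in place of a live call:

```shell
# Trimmed sample of the response shown above
# (on a live server you would use: response=$(curl -s http://localhost:9200))
response='{"name":"ubuntu.itzgeek.local","cluster_name":"elasticsearch","version":{"number":"7.7.1"}}'

# Pull the version number out of the JSON with python3
version=$(printf '%s' "$response" | python3 -c 'import sys, json; print(json.load(sys.stdin)["version"]["number"])')
echo "Elasticsearch version: $version"
```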

Install & Configure Logstash

Logstash is open-source log-parsing software that collects logs, parses them, and stores them in Elasticsearch for future use. With the help of available plugins, it can process different types of events with no extra work.

Install Logstash using the apt command.

sudo apt install -y logstash-oss

Logstash configuration consists of three plugin sections, namely input, filter, and output. You can put all three sections in a single file or in a separate file for each section, ending with .conf.

Here, we will use a single file for all three sections.
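The overall shape of that file is three blocks, one per plugin section:

```
input {
  # where events enter the pipeline (e.g. beats)
}

filter {
  # how events are parsed and transformed (e.g. grok, date)
}

output {
  # where events are shipped (e.g. elasticsearch)
}
```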

Create a configuration file under /etc/logstash/conf.d/ directory.

sudo nano /etc/logstash/conf.d/logstash.conf

In the input plugin, we will configure Logstash to listen on port 5044 for incoming logs from the agent (Beats) that is running on client machines.

input {
  beats {
    port => 5044
  }
}

For the filter plugin, we will use Grok to parse syslog messages before sending them to Elasticsearch for storage.

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
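As a rough illustration of what %{SYSLOGLINE} pulls out of a log line, here is a plain bash sketch (this is not how grok actually works internally, and the sample log line is made up):

```shell
line="Jun  5 09:14:02 webserver sshd[1234]: Accepted password for bob"

timestamp=${line:0:15}   # syslog timestamps are a fixed 15 characters: "Jun  5 09:14:02"
rest=${line:16}          # everything after the timestamp and its trailing space
logsource=${rest%% *}    # first word: the host name, "webserver"
message=${rest#* }       # the remainder: program, pid, and message text

echo "timestamp=$timestamp"
echo "logsource=$logsource"
echo "message=$message"
```

Note the two layouts in the date filter above: single-digit days are padded with an extra space ("MMM  d"), which is why both patterns are listed.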

In the output plugin, we will define where the logs get stored; in our case, an Elasticsearch instance.

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
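The index option expands per event: %{[@metadata][beat]} is the shipper's name (filebeat in our case) and %{+YYYY.MM.dd} formats the event's timestamp, so events land in one index per day. A quick shell sketch of the resulting name:

```shell
beat="filebeat"                     # value of %{[@metadata][beat]} for Filebeat events
index="${beat}-$(date +%Y.%m.%d)"   # %{+YYYY.MM.dd} renders the event date the same way
echo "$index"                       # e.g. filebeat-2020.06.05
```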

Now start and enable the Logstash service.

sudo systemctl start logstash

sudo systemctl enable logstash

Logstash log:

sudo cat /var/log/logstash/logstash-plain.log

Install and Configure Kibana

Kibana provides visualization of data stored on an Elasticsearch instance. Install Kibana using the apt command.

sudo apt install -y kibana-oss

By default, Kibana listens on localhost, which means you cannot access the Kibana web interface from external machines. To access Kibana from external machines, you need to set server.host to the system IP address in the /etc/kibana/kibana.yml file.

sudo nano /etc/kibana/kibana.yml

Make a change like below, using your system's IP address.

server.host: "<system-ip-address>"

Also, in some cases, Elasticsearch and Kibana may run on different machines. In that case, update the below line with the IP address of the Elasticsearch server.

elasticsearch.hosts: ["http://localhost:9200"]
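Putting both settings together, the relevant part of /etc/kibana/kibana.yml would look something like this (the server IP is an example; use your own):

```yaml
# /etc/kibana/kibana.yml -- relevant settings only
server.host: "192.168.0.10"                      # IP the Kibana web UI binds to (example)
elasticsearch.hosts: ["http://localhost:9200"]   # where Kibana reaches Elasticsearch
```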

Start and enable Kibana on machine startup.

sudo systemctl start kibana

sudo systemctl enable kibana

Install Filebeat

Filebeat is a software client that runs on the client machines to send logs to the Logstash server for parsing (in our case) or directly to Elasticsearch for storing.

We will use the Logstash server’s hostname in the configuration file. So, add a DNS record or a host entry for the Logstash server on the client machine.

sudo nano /etc/hosts

Make an entry like below, replacing the IP address with that of your Logstash server.

192.168.0.10   server.itzgeek.local

Install HTTPS support for apt.

sudo apt update

sudo apt install -y apt-transport-https

Set up the Elastic repository on your system for Filebeat installation.

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

echo "deb https://artifacts.elastic.co/packages/oss-7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Install Filebeat using the following command.

sudo apt update

sudo apt install -y filebeat

Edit the filebeat configuration file /etc/filebeat/filebeat.yml to send logs to the Logstash server.

sudo nano /etc/filebeat/filebeat.yml

The below configuration in the inputs section is to send system logs (/var/log/syslog) to the Logstash server.

For this demo, I have commented out /var/log/*.log to avoid sending all logs to the Logstash server.

.    .    .

#=========================== Filebeat inputs =============================


# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

filebeat.inputs:

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/syslog
    #- /var/log/*.log

.    .    .

Since we are sending logs to the Logstash for parsing, comment out the section output.elasticsearch: and uncomment output.logstash: in the output section.

.    .    .

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["server.itzgeek.local:5044"]

.    .    .

Start and enable the Filebeat service.

sudo systemctl start filebeat

sudo systemctl enable filebeat

Filebeat’s log:

sudo cat /var/log/syslog

Access ELK Dashboard

Access the Kibana web interface by going to the following URL, replacing the IP with your Kibana server's address (5601 is Kibana's default port).

http://<kibana-server-ip>:5601
You should now see Kibana's home page.

Kibana’s Starting Page

On your first access, you need to create the filebeat index. Go to Management » Index Patterns » Create Index Pattern.

Index Patterns

Type the following in the Index pattern box.

filebeat-*

You should see the filebeat index listed, something like below. Click Next step.
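The trailing * is a glob, so the pattern covers every daily Filebeat index while leaving other indices out. A small shell sketch of the same matching (the index names are made-up examples):

```shell
matched=""
for idx in filebeat-2020.06.01 filebeat-2020.06.02 packetbeat-2020.06.01; do
  case "$idx" in
    filebeat-*) matched="$matched $idx"; echo "matched: $idx" ;;  # covered by the pattern
    *)          echo "skipped: $idx" ;;                           # not part of this pattern
  esac
done
```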

Create Index Pattern

Select @timestamp and then click on Create index pattern.

Time Filter Field Name
Check out the fields in the index pattern.


Click Discover in the left navigation to view the incoming logs from client machines.

Discover Events


That’s All. I hope you have learned how to install the ELK stack on Ubuntu 20.04.
