Setting up ELK 5.x (ElasticSearch, Logstash, Kibana) cluster

We recently upgraded from Elasticsearch 2.4.3 to 5.6.2.
Below are the steps we used to install and configure the new ELK cluster.

ElasticSearch

Installation

mkdir -p /work/elk/data/data-es562
mkdir -p /work/elk/data/logs-es562
mkdir -p /work/elk/data/repo-es562
cd /work/elk/
curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.2.tar.gz
tar -zxvf elasticsearch-5.6.2.tar.gz
curl -O https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-5.6.2.zip
cd /work/elk/elasticsearch-5.6.2
./bin/elasticsearch-plugin install file:///work/elk/x-pack-5.6.2.zip
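
A quick way to confirm the plugin actually installed (the output should include x-pack):

./bin/elasticsearch-plugin list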

Configuration


Settings for Master + Ingest node in elasticsearch.yml

We reuse the master nodes as ingest nodes because we don't have any heavy ingest pipelines, and X-Pack monitoring requires at least one ingest node in the cluster.
cluster.name: ESDev562

node.name: "master_01"
node.master: true            # Enable the node.master role (enabled by default).
node.data: false             # Disable the node.data role (enabled by default).
node.ingest: true            # Enable the node.ingest role (enabled by default).
search.remote.connect: false # Disable cross-cluster search (enabled by default).
node.ml: false               # Disable the node.ml role (enabled by default in X-Pack).
xpack.ml.enabled: false      # The xpack.ml.enabled setting is enabled by default in X-Pack.
 
path.data: /work/elk/data/data-es562
path.logs: /work/elk/data/logs-es562
path.repo: /work/elk/data/repo-es562
 
network.host: 0.0.0.0
 
http.port: 9200
transport.tcp.port: 9300
 
discovery.zen.ping.unicast.hosts: ["master_01:9300", "master_02:9300", "master_03:9300"]
 
discovery.zen.minimum_master_nodes: 2  # quorum of master-eligible nodes: (3 / 2) + 1 = 2
 
gateway.recover_after_nodes: 3
  
# By default all script types are allowed to be executed. The supported types are none, inline, file, indexed, stored
script.allowed_types: inline, file  # disables indexed, stored. file allows running scripts found on the filesystem in /etc/elasticsearch/scripts (rpm or deb) or config/scripts (zip or tar).
 
# By default all script contexts are allowed to be executed. Scripting can be enabled or disabled in different contexts in the Elasticsearch API. The supported contexts are none, aggs, search, update, [plugin-name].
script.allowed_contexts: search, ingest, update, xpack_watch

indices.recovery.max_bytes_per_sec: 200mb  # Defaults to 40mb.

action.destructive_requires_name: true  # require explicit names when deleting indices; disallow deletion via wildcards or _all

cluster.routing.allocation.node_initial_primaries_recoveries: 2  # recovery of an unassigned primary after node restart uses data from the local disk. These should be fast so more initial primary recoveries can happen in parallel on the same node.
cluster.routing.allocation.same_shard.host: true  # Allows performing a check to prevent allocation of multiple instances of the same shard on a single host, based on host name and host address. Defaults to false, meaning that no check is performed by default. This setting only applies if multiple nodes are started on the same machine.

xpack.security.enabled: true  # Set to false to disable X-Pack security
xpack.security.dls_fls.enabled: false  # Defaults to true. Set to false to prevent document and field level security from being configured.
xpack.security.audit.enabled: true  # Enable auditing to keep track of attempted and successful interactions with your Elasticsearch cluster
xpack.security.http.ssl.enabled: true # Enable SSL on the HTTP layer to ensure that communication between HTTP clients and the cluster is encrypted
xpack.security.transport.ssl.enabled: true # Enable SSL on the transport networking layer to ensure that communication between nodes is encrypted.

xpack.ssl.key: /work/elk/elasticsearch-5.6.2/config/x-pack/master_01.key  # The full path to the node key file.
xpack.ssl.certificate: /work/elk/elasticsearch-5.6.2/config/x-pack/master_01.crt # The full path to the node certificate
xpack.ssl.certificate_authorities: [ "/work/elk/elasticsearch-5.6.2/config/x-pack/ca.crt" ] # An array of paths to the CA certificates that should be trusted.
 
xpack.monitoring.enabled: true  # Set to false to disable X-Pack monitoring.
 
xpack.graph.enabled: true  # Set to false to disable X-Pack graph.
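
The node key, certificate, and CA files referenced above can be generated with X-Pack's certgen tool. A minimal sketch, assuming an instances.yml that lists your own node names and hostnames (the entries below are illustrative):

# instances.yml - one entry per node
instances:
  - name: "master_01"
    dns: ["master_01"]
  - name: "coord_01"
    dns: ["coord_01"]

cd /work/elk/elasticsearch-5.6.2
./bin/x-pack/certgen -in instances.yml -out certs.zip
# Unzip certs.zip, then copy each node's .key/.crt plus ca.crt into config/x-pack/.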

Settings for Data + ML node

# Remaining settings are the same as for the master node
node.master: false           # Disable the node.master role (enabled by default).
node.data: true              # Enable the node.data role (enabled by default).
node.ingest: false           # Disable the node.ingest role (enabled by default).
search.remote.connect: false # Disable cross-cluster search (enabled by default).
node.ml: true                # Enable the node.ml role (enabled by default in X-Pack).
xpack.ml.enabled: true       # The xpack.ml.enabled setting is enabled by default in X-Pack.

Settings for Coordinating node

# Remaining settings are the same as for the master node
node.master: false           # Disable the node.master role (enabled by default).
node.data: false             # Disable the node.data role (enabled by default).
node.ingest: false           # Disable the node.ingest role (enabled by default).
search.remote.connect: false # Disable cross-cluster search (enabled by default).
node.ml: false               # Disable the node.ml role (enabled by default in X-Pack).
xpack.ml.enabled: false      # The xpack.ml.enabled setting is enabled by default in X-Pack.
  
## Added CORS Support
http.cors.enabled: true
http.cors.allow-origin: "*"
# http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE  # commented out, as already the default
http.cors.allow-headers: X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
http.max_content_length: 500mb  # Defaults to 100mb
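
To sanity-check the CORS settings, a preflight request should echo an Access-Control-Allow-Origin header back. A rough example (the Origin value is arbitrary; use https and the CA file only if the HTTP SSL settings above are active):

curl -i --cacert /work/elk/elasticsearch-5.6.2/config/x-pack/ca.crt \
  -u elastic -H "Origin: http://example.com" \
  -H "Access-Control-Request-Method: GET" \
  -X OPTIONS "https://coord_01:9200/"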

Settings for Elasticsearch log4j2.properties

logger.action.level = warn
   
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}.log.gz
   
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfLastModified
appender.rolling.strategy.action.condition.age = 30D
appender.rolling.strategy.action.PathConditions.type = IfFileName
appender.rolling.strategy.action.PathConditions.glob = ${sys:es.logs.cluster_name}-*
  
rootLogger.level = warn
  
logger.index_search_slowlog_rolling.level = warn
   
logger.index_indexing_slowlog.level = warn

Setting default system user passwords and the license (on any one node only)

Note: The default password for the elastic user is changeme.
./bin/elasticsearch &
# Install the Elasticsearch license
curl -XPUT -u elastic 'localhost:9200/_xpack/license' -H "Content-Type: application/json" -d @elasticsearch-non-prod-v5.json
# Change the elastic user's password
curl -XPUT -u elastic 'localhost:9200/_xpack/security/user/elastic/_password' -H "Content-Type: application/json" -d '{"password" : "elasticpassword"}'
# Change the kibana user's password, which is used to connect by Kibana
curl -XPUT -u elastic 'localhost:9200/_xpack/security/user/kibana/_password' -H "Content-Type: application/json" -d '{"password" : "kibanapassword"}'
# Change the logstash_system user's password, which is used to collect monitoring logs from logstash
curl -XPUT -u elastic 'localhost:9200/_xpack/security/user/logstash_system/_password' -H "Content-Type: application/json" -d '{"password" : "logstash_system_password"}'
# Create a role that can manage index templates, monitor the cluster, create indices, and write documents (index, update, delete) to the indices it creates
curl -XPOST -u elastic 'localhost:9200/_xpack/security/role/logstash_writer' -H "Content-Type: application/json" -d '
{
  "cluster": ["manage_index_templates", "monitor"],
  "indices": [
    {
      "names": [ "*" ], 
      "privileges": ["write","create_index"]
    }
  ]
}'
# Create a logstash_internal user and assign it the logstash_writer role
curl -XPOST -u elastic 'localhost:9200/_xpack/security/user/logstash_internal' -H "Content-Type: application/json" -d '
{
  "password" : "logstash_writer_password",
  "roles" : [ "logstash_writer"],
  "full_name" : "Internal Logstash User"
}'
# Create a role for reading the indices Logstash creates; such users need the read and view_index_metadata privileges
curl -XPOST -u elastic 'localhost:9200/_xpack/security/role/logstash_reader' -H "Content-Type: application/json" -d '
{
  "indices": [
    {
      "names": [ "*" ], 
      "privileges": ["read","view_index_metadata"]
    }
  ]
}'
# Assign your Logstash users the logstash_reader role
curl -XPOST -u elastic 'localhost:9200/_xpack/security/user/logstash_user' -H "Content-Type: application/json" -d '
{
  "password" : "logstash_reader_password",
  "roles" : [ "logstash_reader"],
  "full_name" : "Kibana User"
}'
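
Once the users are created, a login can be verified with the authenticate API, e.g. for logstash_internal (prepend https:// and add --cacert if the HTTP SSL settings above are active):

curl -u logstash_internal 'localhost:9200/_xpack/security/_authenticate'
# Should return the username and its roles ("logstash_writer") as JSON.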

Kibana


Installation

NOTE: Use the command below to check for existing Kibana instances; grepping the process list for 'kibana' may not find the process, as its description doesn't necessarily mention Kibana.
netstat -lntp | grep 5601

mkdir -p /work/elk/data/data-kibana562/
mkdir -p /work/elk/data/logs-kibana562/

cd /work/elk/
curl -O https://artifacts.elastic.co/downloads/kibana/kibana-5.6.2-linux-x86_64.tar.gz
curl -O https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-5.6.2.zip
tar -zxvf kibana-5.6.2-linux-x86_64.tar.gz
cd /work/elk/kibana-5.6.2-linux-x86_64/
./bin/kibana-plugin install file:///work/elk/x-pack-5.6.2.zip
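
As with Elasticsearch, you can confirm the plugin landed (the list should include x-pack):

./bin/kibana-plugin list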

Configuration

Elastic suggests installing Kibana on the same machine as a coordinating-only node, which then acts as a load balancer.

kibana.yml

server.port: 5601
server.host: 0.0.0.0
server.name: ${hostname}

elasticsearch.url: "https://coord_01:9200"
elasticsearch.username: "kibana"
elasticsearch.password: "kibanapassword"

logging.dest: /work/elk/data/logs-kibana562/kibana_debug.log
path.data: /work/elk/data/data-kibana562/

server.ssl.enabled: true
server.ssl.key: /work/elk/elasticsearch-5.6.2/config/x-pack/coord_01.key
server.ssl.certificate: /work/elk/elasticsearch-5.6.2/config/x-pack/coord_01.crt
elasticsearch.ssl.certificateAuthorities: [ "/work/elk/elasticsearch-5.6.2/config/x-pack/ca.crt" ]
  
# encryptionKey - an optional random string of at least 32 characters; Kibana generates one if not provided. The example below is 40 hex characters:
# xpack.security.encryptionKey: "e386d5f380dd962614538ad70d7e9745760f7e8e"
# xpack.reporting.encryptionKey: "e386d5f380dd962614538ad70d7e9745760f7e8e"
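
Once Kibana is up, a quick probe of its status endpoint confirms it is serving (-k skips CA verification for the self-signed certificate; the credentials are the ones set earlier):

curl -k -u elastic:elasticpassword 'https://coord_01:5601/api/status'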

Settings for X-Pack audit logging in Elasticsearch log4j2.properties

appender.audit_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_access-%d{yyyy-MM-dd}.log.gz

logger.xpack_security_audit_logfile.level = warn

Logstash

Installation

mkdir -p /work/elk/data/data-logstash562/
mkdir -p /work/elk/data/logs-logstash562/
cd /work/elk/
curl -O https://artifacts.elastic.co/downloads/logstash/logstash-5.6.2.tar.gz
curl -O https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-5.6.2.zip
tar -zxvf logstash-5.6.2.tar.gz
cd /work/elk/logstash-5.6.2/
./bin/logstash-plugin install file:///work/elk/x-pack-5.6.2.zip

Configuration


logstash.yml

path.data: "/work/elk/data/data-logstash562/"

log.level: warn

path.logs: "/work/elk/data/logs-logstash562/"

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: ["https://coord_01:9200"]
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "logstash_system_password"
xpack.monitoring.elasticsearch.ssl.ca: "/work/elk/logstash-5.6.2/config/x-pack/ca.crt"
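
logstash.yml only covers monitoring; the actual pipeline authenticates to Elasticsearch as the logstash_internal user created earlier. A minimal sketch of a pipeline config, assuming a Beats input on port 5044 (the file name, input, and index pattern are illustrative, not from this setup):

# /work/elk/logstash-5.6.2/config/pipeline.conf (hypothetical file name)
input {
  beats {
    port => 5044                # assumed input; replace with your real inputs
  }
}
output {
  elasticsearch {
    hosts    => ["https://coord_01:9200"]
    user     => "logstash_internal"
    password => "logstash_writer_password"
    ssl      => true
    cacert   => "/work/elk/logstash-5.6.2/config/x-pack/ca.crt"
    index    => "logstash-%{+YYYY.MM.dd}"   # Logstash's default index pattern
  }
}

Validate it with ./bin/logstash -f config/pipeline.conf --config.test_and_exit, then drop the flag to start Logstash for real.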

log4j2.properties

appender.rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}.log.gz
appender.json_rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}.log.gz
appender.rolling_slowlog.filePattern = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}-%d{yyyy-MM-dd}.log.gz
appender.json_rolling_slowlog.filePattern = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}-%d{yyyy-MM-dd}.log.gz
