
Setting up ELK 5.x (ElasticSearch, Logstash, Kibana) cluster

We recently upgraded from ElasticSearch 2.4.3 to 5.6.2.
Below are the steps we used to install and configure the new ELK cluster.

ElasticSearch

Installation

mkdir -p /work/elk/data/data-es562
mkdir -p /work/elk/data/logs-es562
mkdir -p /work/elk/data/repo-es562
cd /work/elk/
curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.2.tar.gz
tar -zxvf elasticsearch-5.6.2.tar.gz
curl -O https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-5.6.2.zip
cd /work/elk/elasticsearch-5.6.2
./bin/elasticsearch-plugin install file:///work/elk/x-pack-5.6.2.zip
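
A quick sanity check that the plugin installed cleanly (elasticsearch-plugin list ships with the 5.x distribution):

./bin/elasticsearch-plugin list  # should list x-pack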

Configuration


Settings for Master + Ingest node in elasticsearch.yml

We have reused the master nodes as ingest nodes, because we don't have any heavy ingest pipelines, and X-Pack monitoring requires at least one ingest node to be present in the cluster.
cluster.name: ESDev562

node.name: "master_01"
node.master: true            # Enable the node.master role (enabled by default).
node.data: false             # Disable the node.data role (enabled by default).
node.ingest: true            # Enable the node.ingest role (enabled by default).
search.remote.connect: false # Disable cross-cluster search (enabled by default).
node.ml: false               # Disable the node.ml role (enabled by default in X-Pack).
xpack.ml.enabled: false      # The xpack.ml.enabled setting is enabled by default in X-Pack.
 
path.data: /work/elk/data/data-es562
path.logs: /work/elk/data/logs-es562
path.repo: /work/elk/data/repo-es562
 
network.host: 0.0.0.0
 
http.port: 9200
transport.tcp.port: 9300
 
discovery.zen.ping.unicast.hosts: ["master_01:9300", "master_02:9300", "master_03:9300"]
 
discovery.zen.minimum_master_nodes: 2  # quorum of master-eligible nodes: (3 / 2) + 1 = 2, protects against split-brain
 
gateway.recover_after_nodes: 3
  
# By default all script types are allowed to execute. The supported types are none, inline, file, stored.
script.allowed_types: inline, file  # disables stored scripts. file allows running scripts found on the filesystem in /etc/elasticsearch/scripts (rpm or deb) or config/scripts (zip or tar).
 
# By default all script contexts are allowed to be executed. Scripting can be enabled or disabled in different contexts in the Elasticsearch API. The supported contexts are none, aggs, search, update, [plugin-name].
script.allowed_contexts: search, ingest, update, xpack_watch

indices.recovery.max_bytes_per_sec: 200mb  # Defaults to 40mb.

action.destructive_requires_name: true  # require explicit index names; disallows deleting indices via wildcards * or _all

cluster.routing.allocation.node_initial_primaries_recoveries: 2  # Recovery of an unassigned primary after a node restart uses data from the local disk, so it is fast and more of these initial primary recoveries can run in parallel on one node.
cluster.routing.allocation.same_shard.host: true  # Prevents allocating multiple instances of the same shard on a single host (matched on host name and address). Defaults to false; only relevant when multiple nodes run on the same machine.

xpack.security.enabled: true  # Set to false to disable X-Pack security
xpack.security.dls_fls.enabled: false  # Defaults to true. Set to false to prevent document and field level security from being configured.
xpack.security.audit.enabled: true  # Enable auditing to keep track of attempted and successful interactions with your Elasticsearch cluster
xpack.security.http.ssl.enabled: true # Enable SSL on the HTTP layer to ensure that communication between HTTP clients and the cluster is encrypted
xpack.security.transport.ssl.enabled: true # Enable SSL on the transport networking layer to ensure that communication between nodes is encrypted.

xpack.ssl.key: /work/elk/elasticsearch-5.6.2/config/x-pack/master_01.key  # The full path to the node key file.
xpack.ssl.certificate: /work/elk/elasticsearch-5.6.2/config/x-pack/master_01.crt # The full path to the node certificate
xpack.ssl.certificate_authorities: [ "/work/elk/elasticsearch-5.6.2/config/x-pack/ca.crt" ] # An array of paths to the CA certificates that should be trusted.
 
xpack.monitoring.enabled: true  # Set to false to disable X-Pack monitoring.
 
xpack.graph.enabled: true  # Set to false to disable X-Pack graph.
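
The node key, certificate, and CA referenced in the SSL settings above can be generated with the certgen tool that ships with X-Pack 5.x. A minimal sketch, assuming an illustrative instances.yml with one entry per node (names and DNS entries here are examples; adjust to your hosts):

cd /work/elk/elasticsearch-5.6.2
cat > /tmp/instances.yml <<'EOF'
instances:
  - name: "master_01"
    dns: [ "master_01" ]
EOF
./bin/x-pack/certgen -in /tmp/instances.yml -out /tmp/certificate-bundle.zip
# The bundle should contain ca/ca.crt plus a directory per instance
# (e.g. master_01/master_01.key and master_01.crt); copy these into
# config/x-pack/ on each node.
unzip /tmp/certificate-bundle.zip -d /tmp/certs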

Settings for Data + ML node

# Rest settings same as master node
node.master: false           # Disable the node.master role (enabled by default).
node.data: true              # Enable the node.data role (enabled by default).
node.ingest: false           # Disable the node.ingest role (enabled by default).
search.remote.connect: false # Disable cross-cluster search (enabled by default).
node.ml: true                # Enable the node.ml role (enabled by default in X-Pack).
xpack.ml.enabled: true       # The xpack.ml.enabled setting is enabled by default in X-Pack.

Settings for Coordinating node

# Rest settings same as master node
node.master: false           # Disable the node.master role (enabled by default).
node.data: false             # Disable the node.data role (enabled by default).
node.ingest: false           # Disable the node.ingest role (enabled by default).
search.remote.connect: false # Disable cross-cluster search (enabled by default).
node.ml: false               # Disable the node.ml role (enabled by default in X-Pack).
xpack.ml.enabled: false      # The xpack.ml.enabled setting is enabled by default in X-Pack.
  
## Added CORS Support
http.cors.enabled : true
http.cors.allow-origin : "*"
# http.cors.allow-methods : OPTIONS, HEAD, GET, POST, PUT, DELETE  # commented out, as already default
http.cors.allow-headers : X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
http.max_content_length : 500mb # Defaults to 100mb
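
Once all nodes are started (and the elastic password has been set, see the next section), the role layout can be verified with the _cat/nodes API; the node.role column prints m/d/i flags for master/data/ingest:

curl --cacert /work/elk/elasticsearch-5.6.2/config/x-pack/ca.crt -u elastic 'https://coord_01:9200/_cat/nodes?v&h=name,node.role,master'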

Settings for elasticsearch log4j2.properties

logger.action.level = warn
   
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}.log.gz
   
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfLastModified
appender.rolling.strategy.action.condition.age = 30D
appender.rolling.strategy.action.PathConditions.type = IfFileName
appender.rolling.strategy.action.PathConditions.glob = ${sys:es.logs.cluster_name}-*
  
rootLogger.level = warn
  
logger.index_search_slowlog_rolling.level = warn
   
logger.index_indexing_slowlog.level = warn

Setting default system users and license (run on any one node only)

Note: The default password for the elastic user is changeme.
./bin/elasticsearch &
# Install ElasticSearch license
curl -XPUT -u elastic 'http://localhost:9200/_xpack/license' -H "Content-Type: application/json" -d @elasticsearch-non-prod-v5.json
# Change the elastic user's password
curl -XPUT -u elastic 'localhost:9200/_xpack/security/user/elastic/_password' -H "Content-Type: application/json" -d '{"password" : "elasticpassword"}'
# Change the kibana user's password, which is used to connect by Kibana
curl -XPUT -u elastic 'localhost:9200/_xpack/security/user/kibana/_password' -H "Content-Type: application/json" -d '{"password" : "kibanapassword"}'
# Change the logstash_system user's password, which is used to collect monitoring logs from logstash
curl -XPUT -u elastic 'localhost:9200/_xpack/security/user/logstash_system/_password' -H "Content-Type: application/json" -d '{"password" : "logstash_system_password"}'
# Create a role that can manage index templates, create indices, and write (index, update, and delete) documents in the indices it creates
curl -XPOST -u elastic 'localhost:9200/_xpack/security/role/logstash_writer' -H "Content-Type: application/json" -d '
{
  "cluster": ["manage_index_templates", "monitor"],
  "indices": [
    {
      "names": [ "*" ], 
      "privileges": ["write","create_index"]
    }
  ]
}'
# Create a logstash_internal user and assign it the logstash_writer role
curl -XPOST -u elastic 'localhost:9200/_xpack/security/user/logstash_internal' -H "Content-Type: application/json" -d '
{
  "password" : "logstash_writer_password",
  "roles" : [ "logstash_writer"],
  "full_name" : "Internal Logstash User"
}'
# Create a role for reading the indices Logstash creates; such users need the read and view_index_metadata privileges
curl -XPOST -u elastic 'localhost:9200/_xpack/security/role/logstash_reader' -H "Content-Type: application/json" -d '
{
  "indices": [
    {
      "names": [ "*" ], 
      "privileges": ["read","view_index_metadata"]
    }
  ]
}'
# Assign your Logstash users the logstash_reader role
curl -XPOST -u elastic 'localhost:9200/_xpack/security/user/logstash_user' -H "Content-Type: application/json" -d '
{
  "password" : "logstash_reader_password",
  "roles" : [ "logstash_reader"],
  "full_name" : "Kibana User"
}'
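
To confirm the new credentials work, the _xpack/security/_authenticate endpoint echoes back the authenticated user and its roles:

curl -u logstash_internal:logstash_writer_password 'localhost:9200/_xpack/security/_authenticate?pretty'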

Kibana


Installation

NOTE: Use the command below to check for existing Kibana instances; searching the process list for 'kibana' may not find the process, as its description does not necessarily contain that string.
netstat -lntp | grep 5601

mkdir -p /work/elk/data/data-kibana562/
mkdir -p /work/elk/data/logs-kibana562/

cd /work/elk/
curl -O https://artifacts.elastic.co/downloads/kibana/kibana-5.6.2-linux-x86_64.tar.gz
curl -O https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-5.6.2.zip
tar -zxvf kibana-5.6.2-linux-x86_64.tar.gz
cd /work/elk/kibana-5.6.2-linux-x86_64/
./bin/kibana-plugin install file:///work/elk/x-pack-5.6.2.zip
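
As with ElasticSearch, a quick check that the plugin went in cleanly:

./bin/kibana-plugin list  # should list x-pack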

Configuration

Elastic suggests installing Kibana on the same machine as a coordinating-only node, which then acts as a load balancer for Kibana's requests.

kibana.yml

server.port: 5601
server.host: 0.0.0.0
server.name: ${hostname}

elasticsearch.url: "https://coord_01:9200"
elasticsearch.username: "kibana"
elasticsearch.password: "kibanapassword"

logging.dest: /work/elk/data/logs-kibana562/kibana_debug.log
path.data: /work/elk/data/data-kibana562/

server.ssl.enabled: true
server.ssl.key: /work/elk/elasticsearch-5.6.2/config/x-pack/coord_01.key
server.ssl.certificate: /work/elk/elasticsearch-5.6.2/config/x-pack/coord_01.crt
elasticsearch.ssl.certificateAuthorities: [ "/work/elk/elasticsearch-5.6.2/config/x-pack/ca.crt" ]
  
# encryptionKey - an optional random string of at least 32 characters, used to encrypt session and report data; Kibana generates one at startup if it is not provided (so sessions reset on restart). The example below is a 40-character hex string:
# xpack.security.encryptionKey: "e386d5f380dd962614538ad70d7e9745760f7e8e"
# xpack.reporting.encryptionKey: "e386d5f380dd962614538ad70d7e9745760f7e8e"
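
Kibana can then be started in the background; since logging.dest is set above, stdout can be discarded (the nohup/redirect combination is our choice, not a Kibana default):

cd /work/elk/kibana-5.6.2-linux-x86_64/
nohup ./bin/kibana > /dev/null 2>&1 &
netstat -lntp | grep 5601  # confirm it is listening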

Settings for X-Pack audit logging (in the ElasticSearch log4j2.properties)

appender.audit_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_access-%d{yyyy-MM-dd}.log.gz

logger.xpack_security_audit_logfile.level = warn

Logstash

Installation

mkdir -p /work/elk/data/data-logstash562/
mkdir -p /work/elk/data/logs-logstash562/
cd /work/elk/
curl -O https://artifacts.elastic.co/downloads/logstash/logstash-5.6.2.tar.gz
curl -O https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-5.6.2.zip
tar -zxvf logstash-5.6.2.tar.gz
cd /work/elk/logstash-5.6.2/
./bin/logstash-plugin install file:///work/elk/x-pack-5.6.2.zip
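
Confirm the plugin installed:

./bin/logstash-plugin list x-pack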

Configuration


logstash.yml

path.data: "/work/elk/data/data-logstash562/"

log.level: warn

path.logs: "/work/elk/data/logs-logstash562/"

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: ["https://coord_01:9200"]
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "logstash_system_password"
xpack.monitoring.elasticsearch.ssl.ca: "/work/elk/logstash-5.6.2/config/x-pack/ca.crt"
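
With the logstash_internal user and logstash_writer role created earlier, a pipeline can write to the secured cluster over SSL. A minimal sketch (the beats input port, file name, and index pattern are illustrative choices):

# /work/elk/logstash-5.6.2/config/logstash-secure.conf
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts    => ["https://coord_01:9200"]
    user     => "logstash_internal"
    password => "logstash_writer_password"
    ssl      => true
    cacert   => "/work/elk/logstash-5.6.2/config/x-pack/ca.crt"
    index    => "logstash-%{+YYYY.MM.dd}"
  }
}

Run it with:

./bin/logstash -f config/logstash-secure.conf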

log4j2.properties

appender.rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}.log.gz
appender.json_rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}.log.gz
appender.rolling_slowlog.filePattern = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}-%d{yyyy-MM-dd}.log.gz
appender.json_rolling_slowlog.filePattern = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}-%d{yyyy-MM-dd}.log.gz
