A sample Logstash config to connect to Elasticsearch with TLS

Following up on my previous blog post, below is a sample Logstash config that can be used to ship data to Elasticsearch over TLS:
cd /work/elk/logstash-5.6.2/
vim ./config/twitter_feeds_consumer/twitter_feeds_consumer.conf

input {
  kafka {
    topics => ["twitter_feeds_kafka_topic_name"]
    bootstrap_servers => "kafka-broker-1.domain.name:9092,kafka-broker-2.domain.name:9092"
    # consumer_threads => 5
    # auto_offset_reset => "earliest"
    group_id => "logstash562_twitter_feeds_consumer_group"
    codec => json { charset => "ISO-8859-1" }
  }
}

output {
# stdout { codec => "rubydebug" }

  elasticsearch {
    hosts => ["https://coord_01:9200"]
    index => "index-name-%{+YYYY.MM.dd}"
    ssl => true
    cacert => '/work/elk/logstash-5.6.2/config/ca.crt'
    user => "logstash_internal"
    password => "logstash_internal_password"
  }
}
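Before starting the pipeline it is worth validating the file with Logstash's config-test flag. This is a sketch assuming the same install directory and config path as above:

```shell
cd /work/elk/logstash-5.6.2/
# -t parses the config, reports syntax errors, and exits
# (exit code is non-zero if the config is invalid)
./bin/logstash -t -f ./config/twitter_feeds_consumer/twitter_feeds_consumer.conf
```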

From Logstash 5.x onwards, every Logstash process must be given its own data path folder.
To do that, create dedicated data and log directories and then start the process:
mkdir -p /work/elk/data/data-logstash562/twitter_feeds_consumer
mkdir -p /work/elk/data/logs-logstash562/twitter_feeds_consumer/
./bin/logstash -f ./config/twitter_feeds_consumer/twitter_feeds_consumer.conf -w 5 --path.data=/work/elk/data/data-logstash562/twitter_feeds_consumer -l /work/elk/data/logs-logstash562/twitter_feeds_consumer &
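Once the process is up, you can check the Logstash log for startup errors and confirm over TLS that documents are being indexed. This sketch reuses the CA cert, credentials, host, and index pattern from the output section above (logstash-plain.log is the default log file name in 5.x):

```shell
# Watch the Logstash log for pipeline startup or TLS errors
tail -f /work/elk/data/logs-logstash562/twitter_feeds_consumer/logstash-plain.log

# Query Elasticsearch with the same CA cert and user to list the indices
# created by the pipeline (curl prompts for the password with -u user)
curl --cacert /work/elk/logstash-5.6.2/config/ca.crt \
     -u logstash_internal \
     "https://coord_01:9200/_cat/indices/index-name-*?v"
```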

Important flags used are:
-f Path to the Logstash config file
-w Number of pipeline workers that execute the filter and output stages of the pipeline in parallel
--path.data From Logstash 5.x onwards, each Logstash process needs its own data folder
-t Check the configuration for valid syntax and then exit
-r Reload the config automatically when it changes
--config.reload.interval How frequently to poll the configuration files for changes, in seconds (default: 3)
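For example, -r and --config.reload.interval can be combined so that edits to the config file are picked up without restarting the process. A sketch using the same paths as above:

```shell
# Start the consumer with automatic config reload, polling every 5 seconds
./bin/logstash -f ./config/twitter_feeds_consumer/twitter_feeds_consumer.conf \
  -r --config.reload.interval 5 \
  --path.data=/work/elk/data/data-logstash562/twitter_feeds_consumer \
  -l /work/elk/data/logs-logstash562/twitter_feeds_consumer &
```

Note that -r cannot be combined with -t, since the config test mode exits immediately after validation.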
