Following up on my previous blog post, below is a sample Logstash config that consumes Twitter feeds from Kafka and outputs the data to Elasticsearch:
cd /work/elk/logstash-5.6.2/
vim ./config/twitter_feeds_consumer/twitter_feeds_consumer.conf
input {
  kafka {
    topics => ["twitter_feeds_kafka_topic_name"]
    bootstrap_servers => "kafka-broker-1.domain.name:9092,kafka-broker-2.domain.name:9092"
    # consumer_threads => 5
    # auto_offset_reset => "earliest"
    group_id => "logstash562_twitter_feeds_consumer_group"
    codec => json { charset => "ISO-8859-1" }
  }
}

output {
  # stdout { codec => "rubydebug" }
  elasticsearch {
    hosts => ["https://coord_01:9200"]
    index => "index-name-%{+YYYY.MM.dd}"
    ssl => true
    cacert => '/work/elk/logstash-5.6.2/config/ca.crt'
    user => "logstash_internal"
    password => "logstash_internal_password"
  }
}
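The `%{+YYYY.MM.dd}` sprintf pattern in the index setting makes the elasticsearch output write to a new index each day, based on each event's `@timestamp` (which Logstash keeps in UTC). A rough shell equivalent showing what today's index name would resolve to:

```shell
# Illustration only: Logstash formats the index name from the event's
# @timestamp, not the local clock; -u approximates that by using UTC.
date -u +index-name-%Y.%m.%d
```

Daily indices like this keep individual indices small and make retention easy, since old days can be dropped by deleting whole indices.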
From Logstash 5.x onwards, every Logstash process must specify its own data path folder.
To do that, follow the steps below:
mkdir -p /work/elk/data/data-logstash562/twitter_feeds_consumer
mkdir -p /work/elk/data/logs-logstash562/twitter_feeds_consumer/
./bin/logstash -f ./config/twitter_feeds_consumer/twitter_feeds_consumer.conf -w 5 \
  --path.data=/work/elk/data/data-logstash562/twitter_feeds_consumer \
  -l /work/elk/data/logs-logstash562/twitter_feeds_consumer &
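The steps above generalize to any number of pipelines: give each one its own data and log directory, then start it in the background with `&`. A minimal sketch of that pattern, using throwaway paths under /tmp and a `sleep` command as a stand-in for the real Logstash process:

```shell
# Hypothetical layout: one data dir and one log dir per pipeline.
base=/tmp/logstash-demo
pipeline=demo_pipeline
mkdir -p "$base/data/$pipeline" "$base/logs/$pipeline"

# Start the (stand-in) long-lived process in the background, as done
# with `&` above, and record its PID so it can be stopped later.
sleep 5 &
pid=$!
echo "pipeline $pipeline running as pid $pid"
kill "$pid"
```

Keeping the PID around (e.g. in a pidfile) makes it straightforward to stop or restart one consumer without touching the others.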
Important flags used are:
-f            Path to the Logstash config file.
-w            Sets the number of pipeline workers that will, in parallel, execute the filter and output stages of the pipeline.
--path.data   From Logstash 5.x onwards, you need to specify a different data folder for every Logstash process.
-l            Directory to write the Logstash log output to.
-t            Check the configuration for valid syntax and then exit.
-r            Reload the config automatically.
--config.reload.interval RELOAD_INTERVAL
              How frequently to poll the configuration location for changes, in seconds. The default is every 3 seconds.
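Putting the flags together, a typical workflow is to validate the config with -t first, then run with auto-reload enabled so config edits are picked up without a restart. A sketch of both invocations (paths as above; the 10-second interval is just an example value):

```shell
# 1. Syntax-check the config and exit (no events are processed).
./bin/logstash -f ./config/twitter_feeds_consumer/twitter_feeds_consumer.conf \
  --path.data=/work/elk/data/data-logstash562/twitter_feeds_consumer -t

# 2. Run with auto-reload, polling the config file every 10 seconds.
./bin/logstash -f ./config/twitter_feeds_consumer/twitter_feeds_consumer.conf -w 5 \
  -r --config.reload.interval 10 \
  --path.data=/work/elk/data/data-logstash562/twitter_feeds_consumer &
```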