I use Filebeat to ship a JSON file to Logstash, and Logstash reads a CSV file directly.
Each source is supposed to go to its own index.
But both indexes in Kibana end up containing the data from the same CSV file. How can I solve this?
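For illustration, here is a minimal sketch of what the two config files dropped into the config directory might look like; the file names, paths, columns and index names are assumptions, not the actual configs:

```
# beats.conf (assumed name) - JSON events shipped by Filebeat
input {
  beats {
    port  => 5044
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "json-index-%{+YYYY.MM.dd}"   # assumed index name
  }
}
```

```
# csv.conf (assumed name) - CSV file read directly by Logstash
input {
  file {
    path           => "/path/to/data.csv"  # assumed path
    start_position => "beginning"
  }
}
filter {
  csv {
    separator => ","
    columns   => ["col1", "col2"]          # assumed column names
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "csv-index-%{+YYYY.MM.dd}"    # assumed index name
  }
}
```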
When you put more than one config file in the directory, the files are concatenated and not treated as separate pipelines. Data from all inputs will therefore go to all outputs. You can get around this by using conditionals or the multi-pipeline feature. This is a common misunderstanding, so you should be able to find plenty of examples if you search this forum.
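As a sketch of the conditional approach, assuming the two inputs above: tag each input and route on the tag in the output section, since after concatenation every event passes through all filters and outputs (the tag and index names here are made up for illustration):

```
input {
  beats {
    port  => 5044
    codec => "json"
    tags  => ["from_beats"]
  }
  file {
    path => "/path/to/data.csv"            # assumed path
    tags => ["from_csv"]
  }
}
output {
  if "from_beats" in [tags] {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      index => "json-index-%{+YYYY.MM.dd}"
    }
  }
  if "from_csv" in [tags] {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      index => "csv-index-%{+YYYY.MM.dd}"
    }
  }
}
```

The same kind of conditional can wrap the csv filter so it only runs on the tagged CSV events.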
Is the pipelines.yml file in the correct location so it is being picked up? Is there any chance it is still picking up the config from the old location?
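For reference, a minimal pipelines.yml for the multi-pipeline alternative; with the deb/rpm packages Logstash reads it from /etc/logstash/pipelines.yml, and inside the official Docker image from /usr/share/logstash/config/pipelines.yml. The pipeline ids and config paths below are assumptions for illustration:

```
# pipelines.yml - one pipeline per config file, so events stay separate
- pipeline.id: beats-json                                   # assumed id
  path.config: "/usr/share/logstash/pipeline/beats.conf"    # assumed path
- pipeline.id: csv
  path.config: "/usr/share/logstash/pipeline/csv.conf"
```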
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2018-12-19T10:24:53,713][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2018-12-19T10:24:53,728][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2018-12-19T10:24:54,197][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-12-19T10:24:54,213][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.5.0"}
[2018-12-19T10:24:54,245][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"cb3e0911-1bbc-4931-9c72-8ef208d4edb4", :path=>"/usr/share/logstash/data/uuid"}
[2018-12-19T10:24:58,254][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch index=>"metricbeat-%{+YYYY.MM.dd}", manage_template=>false, id=>"fe38e84cf8a941fbf650bf7af553dcabeae46374ccb6d0668bec69b2cef3468b", hosts=>[//elasticsearch:9200], document_type=>"metricbeat-system", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_1b04fb65-55fe-4b38-a30e-d8e528e940fc", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-12-19T10:25:00,843][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-12-19T10:25:01,564][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2018-12-19T10:25:01,577][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elasticsearch:9200/, :path=>"/"}
[2018-12-19T10:25:01,936][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2018-12-19T10:25:02,007][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-12-19T10:25:02,012][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-12-19T10:25:02,035][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2018-12-19T10:25:02,070][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2018-12-19T10:25:02,072][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elasticsearch:9200/, :path=>"/"}
[2018-12-19T10:25:02,079][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-12-19T10:25:02,087][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2018-12-19T10:25:02,100][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-12-19T10:25:02,101][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-12-19T10:25:02,116][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2018-12-19T10:25:02,126][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-12-19T10:25:02,128][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
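Note the WARN at 10:24:54,197: "Ignoring the 'pipelines.yml' file because modules or command line options are specified". That message means Logstash is being started with command-line options such as -f, -e or --modules, so pipelines.yml is skipped entirely. Assuming this is the official 6.5.0 Docker image (the /usr/share/logstash paths suggest it), one possible approach is to mount the files and drop any command-line config override, e.g.:

```
# docker-compose fragment (hypothetical setup) - no -f/-e/--modules flags,
# so Logstash falls back to reading pipelines.yml
logstash:
  image: docker.elastic.co/logstash/logstash:6.5.0
  volumes:
    - ./pipelines.yml:/usr/share/logstash/config/pipelines.yml
    - ./pipeline/:/usr/share/logstash/pipeline/
  # note: no "command:" override passing -f or a config string
```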