Different files are ingested into Logstash, but Kibana shows the same data for both indexes

I use Filebeat to ship a JSON file to Logstash, and Logstash itself reads a CSV file. Each source goes to its own index, but both indexes in Kibana end up showing the same CSV data. How can I solve this?

JSON

filebeat (filebeat.yml)

filebeat.config:
  inputs:
    enabled: true
    path: /usr/share/filebeat/*.yml
    reload.enabled: true
    reload.period: 10s
  modules:
    enabled: true


filebeat.inputs:
- type: log                  # "input_type" was deprecated in Filebeat 6.x in favour of "type"
  paths:
    - "/usr/share/filebeat/DataSet/attack-trace.json"
  fields:                    # "document_type" was removed in 6.x; carry it as a custom field instead
    document_type: "pcap_file"
  json.keys_under_root: true
  json.add_error_key: true

output.logstash:
  enabled: true              # the setting is "enabled", not "enable"
  hosts: ["logstash:5555"]
  ssl.enabled: false
  loadbalance: false

setup.kibana:
  enabled: true
  host: "kibana:5601"
  ssl.enabled: false
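
Before chasing the index mix-up, it can be worth confirming that this Filebeat config parses and that the Logstash output is reachable; Filebeat ships test subcommands for both (the config path assumes the container defaults used above):

filebeat test config -c /usr/share/filebeat/filebeat.yml
filebeat test output -c /usr/share/filebeat/filebeat.yml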

logstash (pcap.conf)

input {
        beats {
                port => 5555
                tags => ["pcap"]
                #client_inactivity_timeout => "1200"
                # Filebeat already decodes the JSON (json.keys_under_root: true),
                # so the json codec here is usually redundant
                codec => "json"
        }
}
filter {

}

output {
        elasticsearch {
                hosts => "elasticsearch:9200"
                index => "filebeat-pacp-%{+YYYY.MM.dd}"
        }
}
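
While debugging, it can help to temporarily mirror this pipeline's events to the console, as the CSV pipeline below already does, to see exactly what arrives from Filebeat (a sketch of the same output section):

output {
        elasticsearch {
                hosts => "elasticsearch:9200"
                index => "filebeat-pcap-%{+YYYY.MM.dd}"
        }
        # temporary console output for debugging; remove once the indexes look right
        stdout {
                codec => rubydebug
        }
}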

CSV

logstash (readCSV.conf)

input {
        file {
                path => ["/usr/share/logstash/DataSet/TBrain_IPS.csv"]
                start_position => "beginning"
                sincedb_path => "/dev/null"
        }
}
filter {
        csv {
                separator => ","
                autodetect_column_names => true
                skip_empty_columns => false
                skip_empty_rows => false
                skip_header => false
                id => "ips"
        }
        mutate {
                split => { "event_rule_reference" => ";" }
        }
        # commented-out experiment; note that event['...'] is the pre-5.x Ruby
        # event API, and Logstash 6 requires event.get / event.set instead
        #ruby {
        #       code => "event['event_rule_reference'] = event['event_rule_reference'].keys"
        #}

}
output {
        elasticsearch {
                hosts => "elasticsearch:9200"
                index => "ips"
        }
        stdout {
                 codec => rubydebug
        }
}
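
With both configs loaded, a quick way to see which indexes actually receive documents is Elasticsearch's cat API (host name as in the configs above):

curl -s 'http://elasticsearch:9200/_cat/indices?v'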

When you put more than one config file in the directory, the files are concatenated and not treated as separate pipelines. Data from all inputs will therefore go to all outputs. You can get around this by using conditionals or the multi-pipeline feature. This is a common misunderstanding, so you should be able to find plenty of examples if you search this forum.
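
For instance, routing on tags within a single concatenated pipeline would look roughly like this (a sketch assembled from the configs above; the beats input already tags its events "pcap", and the file input gets a "csv" tag):

input {
        beats {
                port => 5555
                tags => ["pcap"]
        }
        file {
                path => ["/usr/share/logstash/DataSet/TBrain_IPS.csv"]
                start_position => "beginning"
                sincedb_path => "/dev/null"
                tags => ["csv"]
        }
}
filter {
        if "csv" in [tags] {
                csv {
                        separator => ","
                        autodetect_column_names => true
                }
        }
}
output {
        if "pcap" in [tags] {
                elasticsearch {
                        hosts => "elasticsearch:9200"
                        index => "filebeat-pcap-%{+YYYY.MM.dd}"
                }
        } else if "csv" in [tags] {
                elasticsearch {
                        hosts => "elasticsearch:9200"
                        index => "ips"
                }
        }
}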

Can you give an example of the multi-pipeline feature? Thank you.

Did you not find anything when you searched the forum?

I used pipelines.yml to define each pipeline, but the result is still the same.

Logstash .conf files:

logstash.conf  metricbeat.conf  pcap.conf  readCSV.conf

pipelines.yml

- pipeline.id: csv
  path.config: "/usr/share/logstash/pipeline/readCSV.conf"
- pipeline.id: pcap
  path.config: "/usr/share/logstash/pipeline/pcap.conf"
- pipeline.id: metricbeat
  path.config: "/usr/share/logstash/pipeline/metricbeat.conf"

Is the pipelines.yml file in the correct location so it is being picked up? Is there any chance it is still picking up the config from the old location?

It is in the right place.

bash-4.2$ cd config/
bash-4.2$ ls
jvm.options  log4j2.properties  logstash-sample.conf  logstash.yml  pipelines.yml  startup.options
bash-4.2$ cat pipelines.yml
- pipeline.id: csv
  path.config: "/usr/share/logstash/pipeline/readCSV.conf"
- pipeline.id: pcap
  path.config: "/usr/share/logstash/pipeline/pcap.conf"
- pipeline.id: metricbeat
  path.config: "/usr/share/logstash/pipeline/metricbeat.conf"


How do you start Logstash? What do the logs say?

I run Logstash with Docker.
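
The exact docker command was not posted; for reference, a typical way to run the 6.5.0 image so that pipelines.yml is picked up looks like this (a sketch; passing -f or -e on the command line, or setting path.config, makes Logstash ignore pipelines.yml):

docker run --rm \
  -v "$PWD/config/pipelines.yml:/usr/share/logstash/config/pipelines.yml" \
  -v "$PWD/pipeline:/usr/share/logstash/pipeline" \
  docker.elastic.co/logstash/logstash:6.5.0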

Log:

Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2018-12-19T10:24:53,713][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2018-12-19T10:24:53,728][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2018-12-19T10:24:54,197][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-12-19T10:24:54,213][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.5.0"}
[2018-12-19T10:24:54,245][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"cb3e0911-1bbc-4931-9c72-8ef208d4edb4", :path=>"/usr/share/logstash/data/uuid"}
[2018-12-19T10:24:58,254][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch index=>"metricbeat-%{+YYYY.MM.dd}", manage_template=>false, id=>"fe38e84cf8a941fbf650bf7af553dcabeae46374ccb6d0668bec69b2cef3468b", hosts=>[//elasticsearch:9200], document_type=>"metricbeat-system", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_1b04fb65-55fe-4b38-a30e-d8e528e940fc", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-12-19T10:25:00,843][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-12-19T10:25:01,564][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2018-12-19T10:25:01,577][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elasticsearch:9200/, :path=>"/"}
[2018-12-19T10:25:01,936][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2018-12-19T10:25:02,007][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-12-19T10:25:02,012][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-12-19T10:25:02,035][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2018-12-19T10:25:02,070][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2018-12-19T10:25:02,072][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elasticsearch:9200/, :path=>"/"}
[2018-12-19T10:25:02,079][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-12-19T10:25:02,087][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2018-12-19T10:25:02,100][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-12-19T10:25:02,101][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-12-19T10:25:02,116][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2018-12-19T10:25:02,126][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-12-19T10:25:02,128][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}

I have already solved it.
The cause was a path.config setting in logstash.yml, which was overriding pipelines.yml. After commenting it out, everything worked.
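
That matches the WARN near the top of the log ("Ignoring the 'pipelines.yml' file because modules or command line options are specified"): a path.config, whether set on the command line or in logstash.yml, takes precedence and pipelines.yml is never read. After the fix the relevant part of logstash.yml looks roughly like this (a sketch; other settings unchanged):

http.host: "0.0.0.0"
# path.config: /usr/share/logstash/pipeline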
