Kibana shows documents from different indices in one index, why?

Hello! Our ELK stack is 7.15 and I have a problem. We have a Logstash pipeline with two different configs, listening on ports 5000 and 6000 in their input sections. In the filter section each config checks a field on the Cassandra logs: [fields][log_type] == "cassandra_prod" in one config and [fields][log_type] == "cassandra_beta" in the other. In the output sections we set different indices: index => "cassandra-prod-%{+YYYY.MM}" and index => "cassandra-beta-%{+YYYY.MM}". Both indices are created successfully, and when I created an index pattern I saw that only one index (the correct one for me) matched it. But when I open that pattern in the Discover tab, I see documents from both cassandra-beta and cassandra-prod, and the other pattern shows the same situation. Please help!

Hi @My_Google_Account

What is the index pattern that you have defined in Kibana for the index? You mention "the correct index for me", which one is it?

Thanks
Sébastien

pattern cassandra-beta-* for index: cassandra-beta-2021.11

pattern cassandra-prod-* for index: cassandra-prod-2021.11

This is strange indeed. And you confirm that, in Discover, when you expand the documents, you can see both indices in the _index field?

Can you share both of your Logstash pipelines? Use the preformatted text button (</>) to share them.
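
If it helps, a quick way to double-check where the documents actually live is a terms aggregation on the _index metadata field from Kibana Dev Tools. This is just a sketch; the cassandra-* pattern below is an assumption, adjust it to your own index patterns:

GET cassandra-*/_search
{
  "size": 0,
  "aggs": {
    "per_index": {
      "terms": { "field": "_index" }
    }
  }
}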

No, not quite. There is another strange thing: in every index, no matter whether prod or beta, I see documents from both the production and the beta servers in the Discover tab.

config for beta

input {
    beats {
        port => 5000
        codec => plain
    }
}

filter {
    if [fields][log_type] == "cassandra_beta" {
        grok {
            match => { "message" => "%{LOGLEVEL:loglevel}%{SPACE}\[%{WORD:executorName}(.-?)?(%{POSINT:executorLine})?\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{DATA:file_name}\:%{NUMBER:file_line} - %{GREEDYDATA:message_data}" }
        }
    }
    mutate {
        remove_field => [ "host" ]
    }
}

output {
    elasticsearch {
        hosts => "server_ip:9200"
        user => "elastic"
        password => "password"
        index => "cassandra-beta-%{+YYYY.MM}"
        template => "/etc/logstash/templates_for_elk/cassandra-beta.json"
        template_name => "cassandra-beta-*"
        template_overwrite => "true"
    }
}

config for prod

input {
    beats {
        port => 6000
        codec => plain
    }
}

filter {
    if [fields][log_type] == "cassandra_prod" {
        grok {
            match => { "message" => "%{LOGLEVEL:loglevel}%{SPACE}\[%{WORD:executorName}(.-?)?(%{POSINT:executorLine})?\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{DATA:file_name}\:%{NUMBER:file_line} - %{GREEDYDATA:message_data}" }
        }
    }
    mutate {
        remove_field => [ "host" ]
    }
}

output {
    elasticsearch {
        hosts => "server_ip:9200"
        user => "elastic"
        password => "password"
        index => "cassandra-prod-%{+YYYY.MM}"
        template => "/etc/logstash/templates_for_elk/cassandra-prod.json"
        template_name => "cassandra-prod-*"
        template_overwrite => "true"
    }
}

How are you running Logstash?

You do not have any conditional in the output block, so depending on how you are running Logstash, your pipelines won't be separated.

If you are running Logstash with the -f option pointing to the directory that contains the two config files, you will have just one pipeline that is the merge of the two configurations. Every event then passes through both outputs, so the prod and beta events end up stored in both indices, because there is no conditional around the output blocks.
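
For reference, keeping everything in a single merged pipeline would only work if each output were wrapped in a conditional on that field. A minimal sketch based on your configs (only needed if you do not split the pipelines; template options omitted):

output {
    if [fields][log_type] == "cassandra_beta" {
        elasticsearch {
            hosts => "server_ip:9200"
            user => "elastic"
            password => "password"
            index => "cassandra-beta-%{+YYYY.MM}"
        }
    }
    if [fields][log_type] == "cassandra_prod" {
        elasticsearch {
            hosts => "server_ip:9200"
            user => "elastic"
            password => "password"
            index => "cassandra-prod-%{+YYYY.MM}"
        }
    }
}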

If you are running it as a service, it is using pipelines.yml, and if you did not change the content of pipelines.yml it will point to /etc/logstash/conf.d/*.conf and load all the files in that directory as one single pipeline, with the same behaviour as described above.
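
For context, the pipelines.yml shipped with the package looks roughly like this (a single pipeline named main that globs every .conf file), which is why both of your configs get merged:

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"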

To run two separate pipelines you need to configure pipelines.yml to have one pipeline per file.

Something like this:

- pipeline.id: "beta"
  path.config: "/etc/logstash/conf.d/beta.conf"

- pipeline.id: "prod"
  path.config: "/etc/logstash/conf.d/prod.conf"
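
After editing pipelines.yml, Logstash needs a restart to pick up the new pipeline layout. Assuming a standard package install running as a systemd service, something like this (port 9600 is the default for the monitoring API):

sudo systemctl restart logstash
curl -s 'localhost:9600/_node/pipelines?pretty'   # should now list both "beta" and "prod"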

Thanks so much, I separated the pipelines and it worked! Have a nice day!
